In the field of quantum physics, various claims are made concerning the detection of particles or waves in the famous double-slit experiment. Many respected commentators have put forth the idea that the mere process of ‘detection’ (undefined) is sufficient to affect the outcome of the experiment. The concept has spawned the idea that it may be human consciousness that is doing the ‘detection’ and that therefore such consciousness is able to have causal effects in the physical world.
Quite aside from the fact that the logic here is laughable and that there is no scientific definition of consciousness anyhow, there seems to be no credible empirical evidence whatsoever for this effect.
A search for empirical evidence for the phenomenon of ‘detection’ included:
Asking an AI engine to provide evidence
Reading the experiments listed in the Wikipedia article on the double-slit experiment
No evidence was found that anything other than purely physical processes were able to affect the outcome of an experiment and no empirical evidence for a connection to human consciousness was apparent in any of the results.
I asked an AI engine “In the famous double slit experiment, a device is set up to detect which slit a photon passes through. What is the nature of this device and where is the relevant experiment described?”
The answers range from evasive to deranged. Nothing sensible at all is said and certainly no actual experiment involving such a device is presented.
The AI engine is strangely reluctant to give a physical description of an instrument, preferring instead to resort to abstract language and theoretical constructs that only make sense if the basic tenets of quantum physics are already accepted.
What is the nature of a which-slit detector?
Crucially: it is not a little camera watching the photon go by.
A which-path detector is any physical system that becomes entangled with the photon in a way that, even in principle, allows you to distinguish the paths.
The nature of a detector is described by its function, which is to ‘distinguish’, or ‘detect’. This gives no information and tells us nothing about the physical construction. What does ‘becomes entangled with the photon’ mean? How do we know that such an ‘entanglement’ has taken place, apart from the fact that our experiment no longer works? This is not an answer.
More concretely, such a device must:
Interact locally with the photon at one slit or the other
Leave a different physical state depending on which slit was taken
Store that information somewhere in the universe (even if nobody reads it)
Once that happens, interference is lost.
This is better: it is acknowledged that there must be some sort of physical process that does the detecting. However, although no specific process is singled out, the claim is made that whatever process is used, interference is necessarily lost.
Why should this be so? Why should it be that any specific physical process that is designed for detection somehow ends up destroying the interference pattern? No reason is supplied and no evidence is available.
Examples of real which-path devices
1. Recoiling slit or screen (Feynman’s version)
If the slit apparatus can recoil differently depending on which slit the photon passes through, then:
The apparatus carries path information in its momentum
Even if you never measure the recoil, interference disappears
Richard Feynman, The Feynman Lectures on Physics, Vol. III, Chapter 1
This is a thought experiment, not a lab setup—but it’s foundational.
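For context, the argument Feynman attaches to this thought experiment is an uncertainty-principle one; a rough sketch, using the standard symbols (slit separation d, screen distance L, wavelength λ, photon momentum p = h/λ):

To register which slit was used, the recoiling plate must resolve the difference in transverse momentum kicks between the two paths,
$$\delta p \approx p\,\frac{d}{L} = \frac{h d}{\lambda L}.$$
By the uncertainty relation, the plate’s position is then uncertain by
$$\Delta x \gtrsim \frac{\hbar}{\delta p} \sim \frac{\lambda L}{2\pi d},$$
which is of the same order as the fringe spacing $\lambda L / d$, so the pattern is smeared out. This is an argument on paper, not a laboratory result.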
This is a thought experiment and yet is considered ‘foundational’!
The experiment is said to prevent interference even when no ‘detection’ takes place. Therefore, the disappearance of the interference pattern cannot be said to have been caused by ‘detection’ itself but only by the physical mechanics of the instrument.
The experiment is imaginary and the outcome invented.
2. Polarization tagging (real experiments)
You place a polarizer at each slit:
Slit A → photon becomes horizontally polarized
Slit B → photon becomes vertically polarized
Now the photon’s polarization state tells you the path.
Result:
No interference pattern
Even if you don’t look at the polarization
This is one of the cleanest and most common laboratory realizations.
The experiment was changed with the addition of polarisers and no longer demonstrates interference. This is a different experiment to that which was originally described and the result is therefore irrelevant.
Even if we like the result, we did not do any ‘detecting’.
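For reference, the calculation behind the quoted claim can be written in one line; it assumes the standard quantum formalism under discussion and says nothing about any physical act of ‘detection’:

With orthogonal polarisation tags at the slits, the state behind the mask is written
$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\psi_A\rangle|H\rangle + |\psi_B\rangle|V\rangle\right),$$
and the predicted intensity at a screen position $x$ is
$$P(x) \propto |\psi_A(x)|^2 + |\psi_B(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_A^*(x)\,\psi_B(x)\,\langle H|V\rangle\right].$$
The interference (cross) term is multiplied by $\langle H|V\rangle = 0$ and so vanishes, whether or not the polarisation is ever examined.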
3. Atomic “which-path markers” (most famous real experiment)
This is the big one.
Instead of disturbing the photon’s momentum, they:
Let an atom emit a photon
Arrange things so the atom’s internal state records which slit the photon came from
The photon itself is barely disturbed
M. O. Scully, B.-G. Englert and H. Walther (1991), “Quantum optical tests of complementarity”, Nature, 351, 111–116.
Interference still disappears.
This demolished the old idea that interference loss is caused by “measurement disturbance.”
“The photon itself is barely disturbed” and yet “Interference still disappears”. Maybe the two are connected?
No pure ‘detection’ took place here. This is not the same experiment as that which was originally described.
The deep takeaway (this is the part most people miss)
A which-path detector is not defined by what it looks like. It is defined by whether path information exists in principle.
Interference is destroyed when:
The photon becomes entangled with anything that carries path information
Even if that information is never read
Even if the detector is microscopic
Even if the detector is later destroyed
No conscious observer required. No collapse button pressed.
I really would like to know what such an instrument looks like, but the engine isn’t going to tell me. A ‘detector’ is defined by “whether path information exists in principle”. However, the existence or otherwise of path information, in principle or in practice, is surely independent of the existence of an actual physical detection device, whether or not it is destroyed later on.
This is, of course, complete gibberish, but it must have come from somewhere and does in fact resemble the linguistic acrobatics performed by human-composed texts on the subject. We ask for an actual experiment that demonstrates a scientific principle and instead what we get is a theoretical lecture on why the results would certainly have been produced if only the experiment had been performed in principle.
The definition here pretty much defines a detector according to whether or not the interference pattern is destroyed, thereby providing a circular definition, using the term ‘entanglement’ as a linguistic MacGuffin.
Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. – Wikipedia
This, again, refers to photons that are ‘detected’, in an abstract sense with no physical mechanism described and yet the experiments contained ‘detectors’, which are presumably actual physical instruments. This is supposed to be an article on physics for heaven’s sake!
The phenomenon of ‘detection’ is surely a concrete physical process performed according to a specific measurement protocol and yet the authors seemingly want to ignore the physical processes to concentrate only on the abstract concept of ‘detection’. There seems to be a determination here, and in other areas of physics, to describe the world in purely abstract philosophical terms as opposed to measurable physical processes.
The phrase ‘versions of the experiment’ is misleading. If the parameters of the experiment have changed significantly then we have a different experiment and not a ‘version’ of the same experiment. For two different physical set-ups to be regarded as essentially the ‘same’, they should demonstrate both theoretical and practical equivalence. If they give two different sets of experimental outcomes, how can they be said to be the same experiment?
Wikipedia on detectors
The Wikipedia article gives several references describing ‘detectors’.
However:
“…if in a double-slit experiment, the detectors which register outcoming photons are placed immediately behind the diaphragm with two slits: A photon is registered in one detector, not in both…” – Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral – Müller-Kirsten, H. J. W. (2006).
No physical description of a ‘detector’ here.
“It seems that light passes through one slit or the other in the form of photons if we set up an experiment to detect which slit the photon passes, but passes through both slits in the form of a wave if we perform an interference experiment.” Rae, Alastair I.M. (2004). Quantum Physics: Illusion Or Reality?
“It seems that..” – he has not performed this experiment himself nor witnessed a demonstration. He does not make reference to a first hand account of such an event and in all probability has not read a description of one.
Inherently probabilistic
Other atomic-scale entities, such as electrons, are found to exhibit the same behaviour when fired towards a double slit. Additionally, the detection of individual discrete impacts is observed to be inherently probabilistic, which is inexplicable using classical mechanics. – Wikipedia
What does ‘inherently probabilistic‘ mean? There is no such thing. This is a meaningless phrase from the realm of philosophy with no physical definition and consequently does not belong in a theory of physics. If it is undefined then of course it is inexplicable from the point of view of classical mechanics.
The phrase ‘inherently probabilistic’ has no sensible definition in mathematics, physics or philosophy.
If the phrase somehow refers to a mathematical or philosophical construct then we can ask “How does a mathematical or philosophical construct have causal effects in physical reality?” If the phrase is somehow interpreted as a physical process then such a process needs describing and its relationship to the rest of reality needs clarifying.
‘Randomness’ is an outcome pattern and not a generative mechanism: Random events
Conclusion
They haven’t performed this experiment in the way claimed, haven’t persuaded anybody that there is such a thing as ‘detection’ which is somehow independent of physical processes and have not provided any evidence for the involvement of human consciousness with physical reality.
All that happens is that they recycle the same misleading narrative phrased in evasive and deceptive language. The impression is given that there are many important experiments demonstrating the abstract phenomenon of ‘detection’, but no specific instance of this is ever fully described.
If no specific form of a detector is described, then the phenomenon of ‘detection’ is undefined for practical purposes and if this is so, then no downstream deduction can be made.
If no physical form of a detector is described, then we cannot assess in what way the ‘detection’ process might be responsible for the absence of the interference pattern.
However, in those documented cases where the physical form of a detector is described, it is obvious that it is the physical properties of the detector that are upsetting the results and not the abstract process of ‘detection’ itself.
The assertion that only physical processes can affect the physical world seems obvious to most people, but time and again we are asked by physicists to believe that the opposite is true and that physical events are actually driven by abstract philosophical ideas.
Quantum physicists would have us believe that the foundations of physical reality consist merely of statistical laws and that any perceived physical laws are merely some emergent product of such an abstraction.
“Physical laws rest on atomic statistics and are therefore only approximate” – Schrödinger
Almost all of modern physics is divorced from actual reality despite claims to the opposite.
The fundamental nature of Reality
Physicists repeatedly claim physics as a description of the very fundamental nature of reality and indeed the literature of physics is replete with ‘fundamental’ constants, forces and principles. We are therefore justified in criticising it on these grounds.
Newton’s gravity
Newton’s theory of gravity is a form of ‘action at a distance’ according to both Newton and Einstein. We are entitled to reject it on these grounds alone.
It cannot explain perihelion precession or eclipse effects (Van Flandern)
What about density?
Nonsense. Density is a scalar value – it has no direction. All objects fall to Earth at the same rate regardless of density. There is no empirical relationship between density and acceleration. Density has nothing to do with gravitational attraction.
What about mass?
Mass is a scalar value – it has no direction. All objects fall to Earth at the same rate regardless of their mass. There is no empirical relationship between mass and acceleration. Passive gravitational mass has nothing to do with gravitational attraction. Gravity debunked
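For what it is worth, the mass-independence noted above already follows from Newton’s own equations, since the test mass cancels; a one-line statement of the textbook result:

$$F = \frac{GMm}{r^2} = ma \;\;\Rightarrow\;\; a = \frac{GM}{r^2},$$

which is independent of the falling mass $m$; at the Earth’s surface, $GM_\oplus/R_\oplus^2 \approx 9.8\ \mathrm{m\,s^{-2}}$.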
Newton’s bucket
The experiment of Newton’s bucket is of paramount importance. An absolute frame of reference for acceleration and rotation has not been determined and so ‘all’ physical sciences are without a solid foundation.
Einstein did not solve the problem of Newton’s bucket, but without a frame of reference, velocity, acceleration and hence gravity are undefined. Therefore all downstream theory is invalid.
The science of QM is held to be the most thoroughly tested model of reality that we have and yet it does not describe gravity at all. It cannot cope with a phenomenon that occupies 100% of the known universe.
Quantum physics and linearity
Quantum physics describes the world as a linear superposition of ‘states’. Each state is the solution to a linear equation. We should therefore expect a linear reality, but instead we see non-linearity everywhere.
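The linearity being referred to is that of the Schrödinger equation itself; stated minimally:

$$i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\psi,$$

and because $\hat{H}$ is a linear operator, if $\psi_1$ and $\psi_2$ are solutions then so is $a\psi_1 + b\psi_2$ for any constants $a$ and $b$.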
This is grounds enough to reject the entire theory.
Quantum slit detector
Particles are said to behave differently if a ‘detector’ is present. There is no record of an experiment with any ‘detector’ that does not significantly alter the physical processes of the set up.
This is a hoax.
The Shnoll effect
Radioactive decay is described as being of ‘random’ origin but Simon Shnoll showed that rates of decay demonstrated both lunar and seasonal cycles.
This directly contradicts the prevailing quantum explanation.
Foucault’s pendulum
A swinging pendulum is claimed to prove that the Earth is round and that it is spinning relative to ‘absolute space’. No credible, reproducible instance of this experiment with suitable controls can be found.
Gyroscopes
A ship’s gyroscope will turn to point towards the physical North Pole, not magnetic North.
This behaviour is unexplained by contemporary physics and has no conceivable explanation.
Physics is missing something here.
Flight and Bernoulli’s principle
There is no credible explanation for the phenomenon of flight. Bernoulli’s principle is usually trotted out as the cause but plenty of aeronautical engineers on YouTube explain or demonstrate that this is simply not true.
Another hoax.
Electricity
We are told that electricity consists of electrons flowing through a wire, but this is contradicted by Richard Feynman, a YouTube video from Veritasium and statements by other physicists.
Geocentrism
There is no well-defined absolute frame of reference for cosmic movement, acceleration or rotation. There is no defined centre to the universe.
Accordingly, no discussion of ‘centrism’ makes any sense whatsoever.
For practical purposes, just choose what seems best.
Flat Earth
Globers can no longer rely upon much of Newtonian physics to support many of their arguments, but there is still sufficient evidence to propose a globe Earth as the best solution.
Boats disappear below the horizon bottom first, and stars appear to rotate around the poles.
Causality
Physics, particularly Newtonian physics, uses the language of causality but fails to provide a decent definition of the idea.
There is no symbol for ‘causality’ in commonly used mathematics and so no means of expressing such a notion.
Indeterminism
An idea from Quantum Physics, but one with no sensible definition or representation in terms of mathematics; there is no ‘indeterminism’ operator.
The idea is, by means of convoluted language, to present ‘randomness’ as a causal mechanism rather than a statistical outcome.
No charge
There is no such thing as ‘charge’. It has never been measured directly, is known only through the electric fields it is said to generate and has no other measurable properties.
It is surplus to requirements and can be replaced with an electromagnetic field construct.
Static electricity
Static electric fields are said to emanate from ‘charge’ and to keep instantaneous synchrony with such charge.
This, by analogy with Newtonian gravity, is ‘action at a distance’ and can similarly be discarded as a credible theory.
Time dilation
Clocks were flown around the Earth. Einstein predicted a time difference owing to speed; Konstantin Meyl claims reduced gravity at altitude as the cause.
Meyl’s calculations gave a closer prediction than Einstein’s.
Energy
Energy is not in fact conserved and is relative rather than absolute (Hossenfelder). There is no consistent definition of energy as a physical process. To describe completely different processes (kinetic, thermal, …) in the same terms is highly misleading.
Aether physics
The phrase ‘aether physics’ crops up more and more now but that is as far as it gets; a mere repetition of the phrase. This is in response to the failure of relativity but until the intrinsic properties of the aether can be described it isn’t really helping.
Photons
From AI: As a fundamental particle, a photon is generally considered to have no diameter, radius, or fixed “volume” that it occupies, acting in some aspects as a 0-volume point.
How does something ‘act like a 0-volume point’?
Are photons ‘fundamental’ or not?
DNA as a blueprint for life
This is a silly idea refuted by its own description and by lack of experimental evidence. There is not enough information in DNA and no physical mechanism for either translation or transcription.
Mainstream science has no explanation in terms of fundamental physics for the phenomenon of flight. The usual explanation involving Bernoulli’s principle is not fundamental, is irrelevant to this particular problem and is, in any case, not sound science. A better solution is that a vortex system is induced above the wing, thereby creating the necessary low pressure volume needed to provide ‘lift’.
The conventional explanation
The conventional explanation is that the curved profile of the wing somehow results in faster flowing air over the top of the wing and that this inevitably creates a volume of low pressure somehow. The wing moves from high to low pressure and the plane ascends.
So the air that flows over the wing has a longer surface to travel along and gets acceleration due to aerodynamics, running at a higher speed than the air below. In this way, a pressure difference is created between the air above and below the wings generating a lifting force under them that facilitates the aircraft’s suspension in the air. – World Aviation Flight Academy
Nothing in this explanation is credible and none of it has any foundation in either experimental evidence or fundamental theory.
In order for air to accelerate some sort of force is necessary and it is not explained where this force comes from
The explanation ‘gets acceleration due to aerodynamics’ is insufficient
Any explanation that the air moves faster in order to get to the other side in time is not an explanation in terms of basic physics
The two streams do not reach the trailing edge at the same time anyhow
The calculated lift is much less than the observed lift
It is not explained in terms of fundamental theory how high flow rate is causal in the production of low pressure. An association between the two depends upon the specific circumstances
The flow over a wing does not look like this in practice
Aeroplanes can fly upside down
Supersonic aircraft have flat profile wings
Experiments purporting to demonstrate Bernoulli’s principle are hoaxes (see video below)
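For reference, the relation being invoked in these conventional accounts is Bernoulli’s equation along a streamline (steady, incompressible, inviscid flow); it connects speed and pressure but does not by itself say why the air above the wing should move faster:

$$p + \tfrac{1}{2}\rho v^{2} + \rho g z = \text{constant along a streamline}$$

Higher $v$ at constant height implies lower $p$ only where those assumptions actually hold.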
A solution from vortex physics
Aeronautical engineers are in agreement that ‘lift’ is created via the formation of a low pressure volume above the wings of a plane, but are puzzled as to how this phenomenon should arise in the first place.
Hypothesis: ‘Lift’ is actually created by the formation of vortex structures on the upper side of the wing. The structures are analogous to a tornado, which initially moves air inwards, whence it is then directed upwards, thereby creating a low pressure area by actually removing a volume of air from the central location.
In the case of an aeroplane, the air along the wing is directed towards the wing tips where the vortices are seen to ‘shed’. The vortices keep pace with the wing and the continual transport of air away from the body of the plane towards the wing-tips is what maintains the low pressure volume above the wings and provides the lift.
The general properties of vortices
Vortices are certainly present in flight
Vortices possess self-organising properties
Many smaller structures may merge to form a larger structure
Smaller vortices may remain separate
Vortices are certainly powerful enough to have a measurable effect on flight, and engineers spend considerable effort in trying to reduce the degree of drag they cause
Air is moved inwards at first towards the vortex centre and then longitudinally along the vortex structure
A pressure gradient is always formed with pressure decreasing towards the centre of the vortex
The pressure gradient causes adiabatic cooling of the air towards the centre (a worked example follows this list)
This cooling can cause condensation of water vapour and even the formation of ice crystals as is seen in the case of tornadoes
Vortex formation is a ‘least energy’ solution to fluid flow and is therefore the default flow structure in most situations
A vortex layer at the wing surface is an obvious solution to the no-slip boundary condition
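A rough worked example of the adiabatic cooling mentioned in the list above, using the dry-adiabatic relation $T_2 = T_1(p_2/p_1)^{(\gamma-1)/\gamma}$; the ambient and core pressures used here are illustrative assumptions, not measurements of any particular vortex:

```python
# Adiabatic cooling of dry air drawn towards a low-pressure vortex core.
# Illustrative figures only: ambient air at 288 K and 101.3 kPa,
# core pressure assumed to be 70 kPa.
GAMMA = 1.4  # ratio of specific heats for dry air

def adiabatic_temperature(t_ambient: float, p_ambient: float, p_core: float) -> float:
    """Temperature (K) after adiabatic expansion from p_ambient to p_core."""
    return t_ambient * (p_core / p_ambient) ** ((GAMMA - 1) / GAMMA)

t_core = adiabatic_temperature(288.0, 101.3e3, 70.0e3)
print(f"core temperature: {t_core:.1f} K ({t_core - 273.15:.1f} degC)")
# ~259 K (about -14 degC): cold enough for condensation and ice,
# consistent with the visible funnel and hail formation described above.
```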
A visualisation (above) of the creation of a turbulent vortex layer on the upper surface of a wing.
The idea is that these eddies quickly self-organise into a coherent structure which moves air towards the wing tips and cools the air as it does so. This can be seen in the shot below, where a condensation cloud above the wings indicates the presence of low pressure whilst the transported air is shed as helical vortices from the wing tips, again displaying condensation effects.
The image is taken from the video below which is worth watching in its entirety.
The diffuse clouds above the wings form as the plane increases in speed and a split second later we see the helical shedding from the wing tips. This is supportive of the idea that the wing-tip vortices actually originate from the larger surface area of the wing as a whole.
In the wake of the aeroplane, truly giant vortices are seen. We can ask where the required energy came from to enable these formations; did it really just come from the wing tips disturbing the air flow?
More likely the energy came from the total disturbance caused by the movement of the whole plane. Vortices form at the wing surface, self-organise into larger vortices or filaments, move to the wing tips, are shed in helical form where they re-organise yet again according to climatic conditions and finally form the spectacular formations we see in some of the clips.
The constant adaptation of shape is typical of vortex structures.
Delta wings
Delta wings are not only a different shape as compared to conventional wings, but also have a flatter profile which effectively rules out the Bernoulli principle as a source of lift.
So how do they work?
The vortex structures simply adapt to the local environment, arranging themselves longitudinally along the wings and thereby providing the required lift.
Mainstream lectures
Many lectures by mainstream engineers are available to highlight the problems with conventional explanations.
The idea that the sequence of base pairs in DNA represents any sort of blueprint for life is nonsensical from an information-theoretic point of view. The sequences are meaningless by themselves and require systems of translation and transcription which themselves require the existence and maintenance of unfeasibly large amounts of information.
This refutation applies to all systems where ‘information’ is regarded as an abstract entity and divorced from any physical function.
The idea of DNA as ‘information‘
Mainstream science tells us that the ordering of the base pairs in a strand of DNA represents some sort of blue-print for living systems. Depending upon who you read it can represent an entire organism or just the structure of proteins in the body. Either way, the idea is unfeasible.
The base pairs of DNA constitute data, not information, and without interpretation they are really meaningless strings of digits. There is no obvious code for a protein written into a DNA strand and no reference to any laws of physics or biology; all we have so far is a stream of ‘bits’.
The storage of data as a stream of DNA base pairs may be appropriate for stable storage and integrity during reproduction but it will not, of itself, lead to the development of a new organism.
To convert this stream of bits to anything resembling organic life, we therefore need to translate the bits from this coding scheme to one that is more representative of the laws of bio-chemistry and then somehow implement these physical instructions to construct a real entity. Scientists know this and refer to these steps as translation and transcription respectively.
Translation
DNA has about 3 gigabytes of stored data but we need to be able to interpret this data and translate it to a series of protein coding schemes or something similar. The question arises then as to how much data is needed for the translation scheme itself.
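As a rough check on the ‘3 gigabytes’ figure, which depends entirely on how the bases are counted, here is a minimal sketch assuming roughly 3.2 billion base pairs in a haploid human genome (an approximate, commonly quoted figure):

```python
# Rough storage estimates for a haploid human genome (~3.2 billion base pairs).
BASE_PAIRS = 3.2e9

packed_bytes = BASE_PAIRS * 2 / 8   # A/C/G/T packed at 2 bits per base
ascii_bytes = BASE_PAIRS * 1        # one text character per base

print(f"2-bit packed : {packed_bytes / 1e9:.1f} GB")  # ~0.8 GB
print(f"1 byte/base  : {ascii_bytes / 1e9:.1f} GB")   # ~3.2 GB, the usual '3 GB' figure
```

Either way, the figure covers only the raw base sequence; how large a ‘dictionary’ would be needed to interpret it is the separate question taken up below.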
An analogy is that I want to send a Shakespeare sonnet to someone in China but they don’t speak English so an English-Chinese dictionary needs to be involved. A sonnet contains a mere 14 lines of text but the dictionary needs to contain every single word in the English language, just in case it is present in the sonnet.
The dictionary in this case then must contain vastly more data than the information to be translated.
What size of dictionary is required to translate all the potential data in a genome? ‘Unfeasibly large’ appears to be the answer.
Maintaining integrity of the data
The volume of data isn’t the only problem; we have to ensure that it is stored somewhere, free from corruption and somehow inherited. We need to specify some medium in which this data is embodied.
If we say that the integrity is maintained by error correction then we now need extra data and extra functions to implement the error correction and these themselves must be error free.
The mechanisms for error correction, translation and transcription need to be precisely inherited themselves and again require the presence of extra information.
The embodiment of biological data as a digital system has not solved any problem at all but instead added extra problems to solve with now exponentially larger quantities of data. The whole scheme actually necessitates an infinite regression of encoding and error correction.
Transcription
In addition to a dictionary for translation, we need some mechanism for transcription. The translated information coming from the DNA needs to be input into some physical process which will go on to construct proteins or whatever. So what does this process consist of, how was it constructed, where is the information for this and how was such information inherited? The information cannot be contained in the DNA itself because it was needed to construct the machinery that extracts information from the DNA in the first place.
We have managed to describe another infinite chain of regression, this time for the transcription process.
A generalisation of the problem
The problems above are described with reference to DNA but clearly apply to any digital encoding scheme within biological systems.
The central problem is that digital data is just a string of bits and at some time this will need to be converted to a real entity via the laws of physics. There are no laws of physics in a stream of bits, no feedback systems and no energy to drive the process along. All these must come from somewhere else.
The whole narrative draws attention away from the practical problem of manufacturing a cell and just points to the ordering of base pairs as somehow a great discovery.
The same problem will arise whenever a data stream is regarded as source of ‘information’ and whenever the idea of ‘information’ is regarded as an abstract mathematical entity with no concrete relationship to the laws of physics or bio-chemistry.
The solution in abstract
The solution then is to stop regarding ‘information’ and physical structure as separate entities and acknowledge that within biological systems at least, biological information must consist of ‘functionality’, i.e. it must consist of some concrete physical entity that is capable of getting things done.
Information must be in some sense ‘absolute’ and related to the laws of physics in order to remove the need for both translation and transcription. Biological information cannot therefore be digital or ‘abstract’ in nature.
A concrete solution
Konstantin Meyl, in his book “Scalar waves..”, has stated simply that: “(biological) Information is the structure of a scalar wave.”
A scalar wave in this case is an electromagnetic structure as described by Tesla which is likely found throughout biological systems. See: The nature of the bio-field
This proposal fits all of the requirements for biological information.
Such structures are inherently self-stabilising
They have their own motivational force
They propagate along appropriate biological conduits
They have their own intrinsic energy
Additional energy may be absorbed from the environment
Energy transduction enables ‘persistence’
Energy transduction enables ‘function’
Specific characteristics enable specific function
Their electromagnetic nature enables direct interaction with the bio-field
They obviate the need for translation and transcription
These are the requirements that we need in abstract. There may be other physical constructs which implement these features, but scalar waves seem a very good fit.
This post looks at the ‘water engine’ described by Gerald Pollack in his book The Fourth Phase of Water. Pollack’s explanation, that the flow is driven by forces within the tube arising from the action of EZ water and ion pressure, is criticised, and an alternative hypothesis is given which regards the phenomenon as encompassing the entirety of the flow within the container. Such a flow arises from the natural formation of a global electromagnetic field structure which permeates the whole body of fluid and organises individual water molecules into an overall toroidal flow pattern.
Researchers at Pollack’s lab placed a capillary tube into a water bath and found that water began to flow spontaneously through the tube with no obvious driving mechanism. How does this happen?
We found that immersing tubes made of hydrophilic materials into water produces flow through those tubes, similar to blood flow through blood vessels. The driving energy comes from the radiant energy absorbed and stored in the water. Nothing more. Flow may persist undiminished for many hours, even days. Additional incident light brings faster flow. This is not a perpetual motion machine: incident radiant energy drives the flow — in much the same way that it drives vascular flow in plants. And, we have fresh evidence .. that it also assists the heart in driving blood in the cardiovascular system. – Gerald Pollack
A short clip of the actual experiment is to be found here:
Pollack’s hypothesis
After absorbing incident radiant energy, water molecules at the interfacial region break apart (as in the first step of photosynthesis). The OH− groups coalesce to form the EZ (exclusion zone), a highly ordered, negatively charged zone, forming adjacent to the interface, while the complementary H+ components are released into the core of the tube. The core thereby acquires high positive charge. That positive charge creates a gradient with the uncharged external bath beyond the tube’s exits, driving a flow down the gradient, one way or the other depending on which end dominates.
The scheme is shown here in a flared tube which highlights the proton ‘pressure’ gradient, but nevertheless seems to work even if the sides are parallel.
Problems with the EZ explanation
The totality of the flow is not considered, only the local tubular flow. This is important as the water exiting the tube must somehow produce sufficient force to make it all the way round to the input at the other end.
The flow depends upon the continual release of protons from the EZ layer as the layer is built. The EZ layer cannot build indefinitely, but the flow itself is said to continue indefinitely.
The idea of protons being ‘released’ from a strongly charged negative layer suggests that it is the most natural thing in the world for a positively charged particle to move away from a strong negative charge and, moreover, that it has been trying to do so for some time. This is not the case.
Even if protons are ejected somehow from the EZ layer, they are still adjacent to it and still subject to a strong electric field; they should be attracted to such a field.
It is inconsistent to attribute the action of the protons to a repulsive force between themselves whilst ignoring the attractive force between the protons and the EZ layer.
No mechanism is described by means of which infra red light can build hexagonal water.
A slight reformulation of the problem
The problem is usually framed as one of explaining self-generating flow in a tube, but what we actually see is the spontaneous organisation of an entire tank of water into an overall vortex flow, which guides the fluid towards the entrance of the tube, along the tube towards the exit, and then away from the tube and around the tank to complete the cycle.
A hypothesis based upon vortex physics
The overall flow is to be considered and takes the form of a torus similar to that seen in the image of the apple, above. The water flows through the tube at the centre and upon exiting, flows in an organised fashion around the whole container driven by local electromagnetic forces only which continue to direct flow back into the tube on the left of the picture.
Incoming energy is transduced at each point in the container and amalgamated into the overall toroidal flow, which self-organises around the central tube, the ‘core’ of the flow, into a dynamic vortex structure.
Flow outside the tube is powered locally by a toroidal electromagnetic field which flows inwards towards the central tube where it forms magneto-electric ring vortices which propagate within the material of the tube itself and whose outer reaches serve to drive the fluid within the tube and help form the exclusion zone.
We need to look at some supporting evidence for this before revisiting it in greater detail. We will show:
Vortex movement is typical in fluids and gases and is associated with electric fields
The movement of water is in the same direction as the electric field movement
The organisation of the electric vortex field precedes the movement of the water molecules
Energy transduces freely between heat, infrared radiation and kinetic vortex movement
A ‘seed’ structure within water can easily propagate throughout a much larger volume
Electric field movement and kinetic movement of matter are synonymous at the atomic scale
A ‘seed’ structure
A bottle of water can be cooled to well below freezing temperature but still remain as a liquid. If the bottle is given a knock or if a crystal of ice is placed in it, the water will rapidly freeze, with the ice formation spreading out rapidly from the nucleation site.
Some ‘signal’ is spreading throughout the liquid and promoting some alternate organisation of the water molecules thereby forming the ice crystals. The signal is an electromagnetic field as there is nothing else on offer. The signal is not pure ‘information’ but a de facto mechanism for restructuring the liquid into a solid.
The hypothesis for the water engine will say that the tube inserted into the tank acts as a ‘seed’ to promote the restructuring of all of the water in the tank by first creating a toroidal electromagnetic field structure and thence the rearrangement of actual molecules to allow for the kinetic motion.
Energy transduction in water vortices
In one experiment described in Gerald Pollack’s book The Fourth Phase of Water, a glass of water is stirred and the temperature is taken before and after. As more energy is added via the stirring, the temperature of the water drops by several degrees. This is counter to most expectations.
The stirring has promoted vortices and this has somehow sequestered heat energy into some fractal vortex field. Water molecules are said to be continually spinning via the movement of their hydrogen ions at least and so this is where the energy must be going.
An electromagnetic vortex field has been formed by the stirring and energy in such fields forms a fractal cascade towards smaller and smaller scales allowing the storage of potentially large amounts of energy.
However, the spinning hydrogen ions and electron shells at the molecular levels have their own local electric fields which cannot exist independently of the global vortex structure. They must be affected (organised) by such a structure and in turn must contribute to it. Kinetic movement and field movement are therefore almost synonymous at this scale.
Tornado formation
Mainstream opinion is that tornado and hurricane formation is solely a kinetic affair, with energy supplied to molecules via heat differences between land and sea causing the molecules to form strong vortex patterns, giving rise to the structures we observe.
Many advocates of the Electric Universe model, however, are of the opinion that much energy comes from the sun in the form of magnetic flux ropes, solar filaments or electromagnetic Birkeland currents and it is this energy that is responsible for the formation of the phenomena we see.
We can easily accommodate both views if we allow that energy may well come from heat differences and convection but that the vortex streams from the sun are the primary organisers of the rotary structures for the tornadoes. Electromagnetic fields form vortex structures of themselves and organise the kinetic movement of air via electromagnetic forces into actual tornadoes.
Energy accumulates centripetally according to the laws of electromagnetism and a strong vortex ‘radius’ or tornado ‘wall’ forms. Electromagnetic discharge in the form of lightning confirms a link between kinetic motion and electromagnetic field motion.
The vortex patterns of air and electromagnetic fields are identical, the movement of the air following the movement of the field.
The air moves faster towards the vortex radius and, coincident with this, we see a temperature drop and the formation of hailstorms. Heat energy has either been sequestered somewhere or has been transduced into vortex form and is actually driving the accelerating wind vortex.
Helical flow as a response to acoustic waves
Water needs little encouragement to flow in spiral formations, whether it be whirlpools or helices as we can see in the video below.
The energy for the water stream comes from gravity, but a small, barely perceptible impulse causes the water to flow in a striking helical pattern. There is no ‘information’ as such in the sound waves to structure the water this way and so the water itself is self-organizing into a helical flow in response to the stimulus.
Very little extra energy is being introduced into the system, but existing energy is somehow encouraged to organise itself by the stimulus. Classical molecules by themselves cannot behave in this fashion and hence some global ‘field’ is responsible for the regularity.
An alternative model for the atom (Meyl)
Konstantin Meyl describes a gaseous oxygen molecule as having its inner electron shell inverted somewhat, so that it appears on the outside of the atom, as seen below. Each electron is a pure electromagnetic field structure which rotates both about itself and around the nucleus at the same time.
Liquid state water molecules can be supposed to be similar to this but with the addition of two orbiting hydrogen ions which themselves rotate.
All this rotation amounts to a de facto electromagnetic vortex field and de facto source of energy. Movement of such a field is synonymous now with kinetic movement and needs minimal organisation in order to do something interesting.
Part 2: Respiration of gas from the air – Konstantin Meyl
An alternative model of a photon (Meyl)
The classical photon is a point particle with no volume or mass and so no physical presence in reality. Trying to understand how such an entity is converted to kinetic energy for example is a losing battle and so scientists will simply say that it is so and that energy is conserved somehow.
This is wholly inadequate and a better formulation of a photon comes from Konstantin Meyl who describes it as a pair of leapfrogging electromagnetic field vortices – see video below.
These entities have a physical description in terms of Maxwell’s equations and a ‘frequency’ arising from the leapfrogging motion. They have physical dimensions and a discrete quantum of energy arising from the field motion.
Any such phenomena will interact with another electromagnetic field structure in accordance with the existing laws of electromagnetism.
The hypothesis in detail
A container of water is at ‘rest’ but there is much movement at the molecular level with spinning electron shells, rotating hydrogen ions and some sort of vibration representing ‘heat energy’.
The water is full of electromagnetic vortex energy, but as yet this consists of only local vortices at the atomic scale with no global organisation.
A hydrophilic capillary tube is placed within the container and as a consequence of its shape and conductive properties, acts as a seed for the propagation of an organisational effect through the water.
The propagation is that of an electromagnetic field and so happens at close to the speed of light. No external input of energy is required at this stage as all that is happening is that the existing electromagnetic field vortices are becoming electromagnetically aligned according to a global vortex pattern.
Rotational field energy which was once random and produced no average kinetic motion is now organised according to a vortex system and is now having additive effects and thereby producing movement in the water system as a whole.
The whole system has become electromagnetically aligned in an instant and in accordance with the laws of electrodynamics. It now starts to initiate kinetic movement in an overall toroidal flow. Movement is now of ‘matter’ and so inertia comes into play but the flow nevertheless seems to start almost immediately and everywhere at once; no portion of water is seen to be ‘pushing’ another in an obvious manner.
The water molecules are aligned ‘propitiously’ and contain their own internal energy system in the form of rotational field motion. It is this energy that is translated to linear movement of matter at every molecule in the system. We do not therefore have a small volume of water in a capillary tube pushing around large volumes of water in the tank, but instead, a large volume of water organises to move a small but powerful stream into the tube. Guided by the electromagnetic field, all the water in the entire system starts to move under its own energy and all at the same time.
The field is dynamic; it flows towards the tube where energy builds up and adopts the form of a magnetic ring vortex, whereupon it will travel inside the material of the tube itself before discharging at the exit, into the water bath.
The movement of matter is initiated by the electric torus field already formed, with molecules moving in a vortex pattern, towards the opening of the central tube and dutifully queueing up before entering.
The system is working as observed but still some energy is wasted and so a net input of energy is required to sustain permanent flow.
The input energy is in the form of photons as described above. These photons are of various frequencies and whilst some will pass straight through the water to little effect, others will be absorbed by the ambient vortex field and their energy distributed throughout the flow to enhance its strength and integrity.
We now no longer have to describe how electromagnetic wave energy is somehow transduced to the movement of matter as everything is now described as a single electromagnetic vortex field. Photon, molecule and field are now all a single energetic substance and all with a single ‘intent’ which is to amalgamate into the overall self-organising electromagnetic flow.
Pollack mentions that the rate of flow through the capillary tube increases as the diameter of the tube decreases. This may make sense if the mechanism of flow is related to what happens inside the tube but it is also consistent with the above hypothesis. The flow is driven by the entirety of the body of water and is hence independent of the diameter of the tube. A narrower tube will necessarily lead to faster flow in order to transport a comparable amount of water through a narrower channel in the same time interval.
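A minimal continuity-equation sketch of this point, assuming, as the hypothesis here does, that the volumetric flow rate Q is set by the bath as a whole rather than by the tube:

$$Q = v\,\pi r^{2} \;\Rightarrow\; v = \frac{Q}{\pi r^{2}},$$

so for fixed $Q$, halving the tube radius quadruples the flow speed through it.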
The exclusion zone
In Gerald Pollack’s description, the exclusion zone is caused by a hexagonal water structure and is fundamental as a cause of the flow. In the description above, however, this zone is incidental to the flow and not causal.
An electromagnetic field is the main organisational and motivational element and so it is here that we must look for an explanation.
The capillary tube itself is an ideal conduit for the flow of some sort of ring vortex whether it be electrically aligned with the tube or magnetically aligned. Field movement within the tube doubtless induces field movement within and around the tube and it follows that it is this movement which somehow promotes both the movement of the water and the centripetal accumulation of nano-particles within the tube.
The exact mechanism by which the particles move remains mysterious but several authors point to simple diffusion mechanisms as being sufficient.
Konstantin Meyl
Meyl, in his book Potential Vortex, vol. 5, gives two hypotheses for the exclusion zone:
The zone contains such a strong negative charge that a layer consisting entirely of electrons is to be considered.
The zone does in fact consist of hexagonally structured hydrogen and oxygen molecules but held together by the magnetic dipole forces arising from spinning electrons.
In neither case is it explained why such structures should arise in the first place and nor is any empirical evidence supplied of the actual existence of a hexagonal structure.
The water engine as a driver for blood flow
The water engine can easily be seen as a primitive circulatory system.
In the interpretation above, the water is not driven centrally by a pump, but instead, incoming energy is transduced directly into kinetic movement at every point in the fluid. The water adopts a toroidal flow pattern without any apparent guide and is driven by the basic laws of physics. This is a highly efficient and reliable mechanism; the circulation does not depend upon a mechanical pump and water will continue to flow as long as there is an energy input.
In the development of a chick embryo, input of infra red energy speeds up the flow and the rounded shape of the egg is ideal for the development of toroidal flow. Blood has been seen to circulate before the heart starts to beat and even before it has developed.
Circulation of the blood begins even before the arterial system has formed and the blood vessels are seen to develop from the existing flow and not the other way around. In other words the flow pre-exists the physical structures that will eventually guide it.
This is good evidence that the whole development is initiated by a toroidal field structure such as happens with the water engine. Energy is transduced and accumulated to form the tissues that will eventually become blood vessels. See: Vascular morphogenesis; The morphogenesis of capillaries
The basic structure of the flow continues even after the physical blood vessels have formed, with the heart being driven largely by the blood flow and not the other way around: The Heart is not a Pump; The Heart and Circulation
The origins of life
If sunlight should shine on a restricted volume of water then no doubt convection currents would appear and an overall toroidal flow would emerge; even without the central tube to guide the flow this will happen. Incident energy will be transduced and will contribute towards the flow.
We already have energy transduction and accumulation. We already have a self-organising and self-regulating system of continuous toroidal flow of both energy and water. We already have both a circulatory system and a primitive metabolism.
If minerals or other particles are caught up in the flow then they are transported inwards towards the vortex centre where the energy is at a maximum and where the possibility of some sort of biological transmutation is greatest. See: Transmutation
Energy supply is continuous and regulated via the vortex principle; material transport is centripetal. These are the requirements for the initial construction of a self-replicating cell structure.
Remarks
The hypothesis given above may seem highly unlikely indeed from the point of view of classical physics, but from the perspective of vortex physics it is pretty much the behaviour of water that we would expect.
In classical physics, electric fields are either static or sinusoidal and molecules are incapable of self-organisation. Transmutation of energy is unexplained and fluids require the application of an external force in order to initiate movement. Such a force may be in the form of gravity, pressure or an external electromagnetic field, but none of these is relevant to this experiment.
What we have is an internal energy supply in the form of the rotation of electromagnetic ‘molecules’ which self-organises to produce its own flow around the container via movement at the molecular scale according to local forces only.
The idea that force and matter are separate entities and that the one acts upon the other is a crippling assumption here and needs to be discarded.
‘Movement’ is the result of ‘field interaction’ at every point in space and in this respect is similar to the action of gravity, where the nature of the field results in an inevitable directional acceleration through the field. The mass of an object is irrelevant to the acceleration under gravity which suggests action upon every single molecule or atom almost simultaneously.
Comparison with gravity
The hypothesis above is reminiscent of the hypothesis for the movement of matter by a gravitational field (see: Gravity as an inertial field). In both cases movement can be described as ‘inertial’ in nature, with the field operating upon each and every atom at the same time, making the movement independent of any external forces and independent of the volume of matter involved in the movement.
No pressure wave is seen to be pushing the water around and it appears to move spontaneously of its own ‘will’. One way of understanding biological systems is to regard such behaviour as fundamental in the same way that gravity is regarded as fundamental in physics. The difficulties of understanding the mechanism at the molecular level are thereby circumvented and the phenomena of blood flow, morphogenesis, development and energy transport in living systems now have a common basis in physical reality.
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
This is a response to Clare Craig’s essay in the Daily Sceptic titled The Unsolved Mystery of How Viruses Spread – and Why Germ Theory Isn’t the Whole Answer, which asks why the epidemiology of influenza does not support the idea of viral transmission.
Clare is highly unusual amongst virus believers in taking an interest in the epidemiology of influenza and in attempting to explain it. However, this is the correct way to proceed. Evidence must be explained; this is an immutable rule of scientific endeavour. It is no good having a nice sounding theory of small particles, genetic sequences and immunity theory, if your predictions simply do not accord with reality.
If the predictions of virology are insufficient to explain the epidemiology, then there must be some additional or alternative cause of disease which leads to the patterns we see. Clare gives some plausible mechanisms for these.
Compare with the views of the emerging ‘no-virus’ movement who have worked out that no virus has ever been isolated but take no interest whatsoever in the epidemiology of disease. Andrew Kaufman has stated in his interview with Steve Kirsch that “Epidemiology is not science” and Tom Cowan is claiming that there is “no such thing as disease” but that all symptoms are really just signs of the body healing itself and are therefore beneficial.
This is no way to make progress. The epidemiology of influenza is key to its cause, as we have nothing else to work with.
The Unsolved Mystery of How Viruses Spread – and Why Germ Theory Isn’t the Whole Answer – Clare Craig [link]
The essay makes some key claims and presents arguments for each:
Viruses exist and are the cause of disease (this post disagrees)
The epidemiology of influenza does not support the orthodox view of viral spread (correct)
Some other seasonal influence is at work (this post agrees and identifies such influence as some sort of disturbance of the Earth’s magnetic field)
Evidence for viral contagion
Viral genetic material turns up in clusters of sick people. The sequences match. They change over time with new mutations in consistent ways without reverting. Even though testing is not perfect, people with positive tests are far more likely to be sick than not. – Clare Craig
This is not evidence for viral contagion.
The existence of viruses has not been proven and so it is premature to talk of ‘viral genetic material’. Moreover, ‘genetic material’ is said to be present in tissue cultures and not within living organisms; this is the whole aim of so-called viral isolation.
No virus has been adequately isolated and so we can never say with any certainty whether or not a particular genetic sequence originated in a virus or somewhere else.
As a consequence of this we can never say with any certainty that changes in measured sequences are the result of ‘mutation’. All that we know is that the results of certain laboratory procedures and software routines produce somewhat reproducible results which vary over time and seem to correlate with disease. Everything else is mere interpretation.
One alternative interpretation is that the body is responding to seasonal variations of the Earth’s electromagnetic field in a stereotypical way. Sick people are in a ‘state’ where metabolism, regulatory processes and gene expression are significantly altered and something of this new state persists in the tissue culture. It is this regulatory persistence which then gives rise to the patterns we see with PCR tests and sequencing results.
Such patterns in the results are then misinterpreted as mutations. The lack of reversion may be caused by the ever changing magnetic field or by the body’s tendency to adapt to almost any stimulus and to produce a noticeably different response when it next encounters a similar stimulus.
The lack of reversion is thereby explained along with the apparent rapid global spread of novel variations in sequences which is now not caused by transmission at all but by the propagation of electromagnetic effects across the globe.
Another interpretation is that a tissue culture forms a biological system of itself which is capable of receiving and interpreting seasonal cues from the environment. In this case, the resulting sequencing results are less related to the state of the original host and more related to the laboratory procedures.
Disease correlates with season and so PCR tests correspond with both season and disease.
PCR and sequencing equipment work by measuring small changes in electrical voltage and so we cannot rule out that seasonal phenomena in the Earth’s magnetic field may have some effect upon the outcome of these procedures by directly influencing the mechanics of the equipment itself. Magnetic variations can be very strong; see the Carrington Event.
Viruses have been well described. Even if isolation methods are not flawless, electron microscopy and crystallography have shown fine-grained details including the shape of structures like the surface of the spike protein.
Viruses have not been adequately isolated and we therefore cannot say that any image seen through an electron microscope is a virus, no matter how well described the morphology. There is little point in looking at a ‘spike protein’ if you can’t demonstrate its function and can’t demonstrate transmission.
Spike proteins may be present in sick individuals but the protein arises from the regulatory status and not the other way around. The spike is a result of sickness, not the cause of it and does not itself constitute a virus.
Morphology, no matter how distinctive, is not proof of function, not proof of virus and not proof of causation of disease.
At high doses, transmission works. Human challenge trials demonstrate that viral exposure can cause illness when the dose is high enough.
The referenced link shows no human-to-human transmission, no transmission via breath and no transmission via droplet. It did not isolate a virus properly and therefore did not demonstrate viral exposure via any means at all.
All these arguments presuppose that the existence of a virus has already been established and that such a virus has been successfully isolated. Neither of these has happened and so no transmission study is worth anything in this respect.
Genetic tracing of viruses during outbreaks shows that distinct lineages spread from person to person in predictable clusters, confirming person-to-person transmission.
No lineage has been seen to ‘spread from person to person’. Influenza occurs in geographic clusters and such a clustering is merely assumed to have been produced by transmission.
Genetic sequences are assumed to come from a virus but this is not yet proven. The tracing of a sequence is, of itself, not proof of the existence of a sub-microscopic particle containing genetic material and is not proof of any causal link between such sequences and any actual disease.
If individuals are responding in predictable ways to disturbances in the Earth’s magnetic field, for example, then this will give the impression of transmission. Whole communities will get sunburn at the beach but this does not confirm person-to-person transmission.
This evidence explains the mechanism of viral infection. But it does not explain the timing of the waves of infection that are characteristic of many viruses like influenza and COVID-19.
No, the evidence does not explain the mechanism of viral infection.
Genetic tracing looked at group clustering, not individual mechanisms
Genetic coincidences in clusters say nothing about a mechanism
Purported morphology of viruses has no relevance to mechanism
High dose nasal inoculation is not a natural transmission mechanism
Big Fail.
Where the traditional model fails
Normal-dose challenge trials often fail. The evidence here is strong: under experimental conditions, exposure frequently does not result in illness. A recent study confirmed this again.
“Often fail”? – “Invariably fail” is closer to the truth.
Influenza comes in seasonal waves at specific locations and that is that. If you arrange a trial outside of flu season you will get no new patients. If you attempt a trial during flu season then either nobody gets ill (wrong location) or a reliable 10% of patients will get ill.
Proving actual transmission will be another Big Fail though as control groups will get as ill as the rest of them. See studies in Antarctica, submarines, cruise liners and space stations.
Hospital-acquired infections peak at the same time as cases in the community. If spread were primarily driven by close contact, we would expect a lag, as community infections peak then admissions then within hospital infections. But the expected lag does not occur. In fact, hospital-acquired infections peak before the admissions to the hospital.
Influenza is caused by exposure to antipathetic EMF whether it be from natural or man-made sources. These are invariably localised, can entrain to a single individual and can penetrate the roof of a building. There is no escape from them: Influenza is a regulatory disorder caused by changes in the weather
The conditions in hospitals, with a high degree of ambient EMF, 5G monitoring equipment, patients in close proximity and lack of fresh oxygen (What causes pneumonia?), further weaken patients and expedite outbreaks in hospitals.
I asked a local covid nurse if she was worried about catching the disease. She replied: “No, nurses have very good hand hygiene.” She was not worried about transmission by breath.
Doctors, nurses, dentists, care workers, shop till assistants have theoretically far greater exposure than the rest of us and should therefore be sick almost all of the time. This never happens.
Waves occur with seasonal regularity. Epidemic peaks in the UK often occur with peak deaths at predictable times of year before falling away for a time
It is often claimed that respiratory virus waves are driven by viral evolution or viral interference. In this view, waves happen when new variants arise that can evade existing immunity, which increases transmissibility and allows the virus to reinfect previously exposed populations.
The reality is .. much harder to explain by mutation or interference alone.
The timing is too consistent
In the UK and many other temperate countries, waves have arrived roughly every 13 weeks, across multiple years and virus types. This rhythm has held steady through variant shifts, travel restrictions and mass behavioural changes. Random events like mutation and competition do not produce this kind of precision.
Influenza arrives with a seasonal rhythm, strongest at winter solstice, but with smaller waves in summer, spring and autumn. The timing is slightly different for different regions and there are latitudinal patterns. Influenza and weather
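As a minimal sketch of how the claimed ~13-week rhythm could be checked against data, the snippet below computes the autocorrelation of a weekly case-count series at lags of 1 to 26 weeks and reports the lag with the strongest correlation. The file name and column names are hypothetical placeholders, not from any source cited here.

```python
# Minimal sketch (hypothetical file and column names): check a weekly
# case-count series for a dominant rhythm of roughly 13 weeks by
# computing its autocorrelation at lags of 1 to 26 weeks.
import numpy as np
import pandas as pd

df = pd.read_csv("weekly_cases.csv")          # assumed columns: week, cases
x = df["cases"].to_numpy(dtype=float)
x = x - x.mean()                              # remove the mean before autocorrelating

acf = [np.corrcoef(x[:-lag], x[lag:])[0, 1] for lag in range(1, 27)]
best = int(np.argmax(acf)) + 1
print(f"Strongest autocorrelation at a lag of {best} weeks (r = {acf[best - 1]:.2f})")
```

A sharp peak in the autocorrelation near 13 weeks, persisting across years, would support the claim; a broad or shifting peak would not.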
Submariners are not protected from infections
Quite. Seasonal changes have an effect even when under water or in space. This suggests the presence of electromagnetic scalar waves (Tesla waves) which can penetrate all forms of matter.
Faster mutation does not speed things up
In the 24 months before December 2021, pre-Omicron SARS-CoV-2 accumulated around 20 mutations per year. From January 2022 to mid-2025, Omicron accumulated approximately 25 per year. Despite this 25% increase in mutation speed, the waves kept arriving on the same seasonal schedule.
This is because mutation is an illusion. Sequences are downstream of disease states and disease states vary strictly with the seasons. See above.
Hospitals are full of virus aerosols year-round but infections still peak and fall
An AI engine could not find an experiment supporting this claim, so it is likely an incorrect inference drawn from an unproven theory.
Viruses have not been shown to exist in hospitals or anywhere else.
Spread skips regions
Each variant of SARS-CoV-2 spread country to country following the same seasonal susceptibility patterns as influenza. Large regions were skipped entirely in early waves e.g. Eastern Europe in spring 2020 and UK, Portugal, Ireland (i.e., the west of Western Europe) in spring 2021.
This is further support for the idea that the cause of influenza comes directly from the atmosphere. We need a cause which can, at the same time, coordinate a disease over a large area but which may also have definite boundaries. An outbreak may cover an area the size of a cruise liner or a whole continent.
Weather systems are a good candidate for this, being organised along the lines of cyclonic vortex structures. Such structures may cover a continent or may be focused down to something a few yards across.
The susceptibility model
The model that makes sense of these observations is one where infectious agents are necessary but not sufficient. The timing of illness must be due to something else.
Infectious agents have not been shown to exist and so cannot be considered necessary.
There must be a third factor that:
Peaks once each season (always in autumn and winter and not always in spring and summer)
Affects only a fraction of the population each season
Can spare certain regions entirely in any given wave
Is capable of synchronising illness peaks across hospitals and communities
Yes, at last we have some agreement.
The necessary factor is some sort of disturbance in the Earth’s magnetic field. Supporting evidence is that influenza outbreaks are sometimes synchronous along lines of latitude and exhibit a general movement from tropics to poles in winter. Exposure to man made EM disturbances such as 5G causes influenza in trials and the initial outbreaks of covid correlated strongly with the rollout of 5G in Wuhan, Italy and New York.
The evidence excludes certain possibilities:
Other environmental factors like electromagnetic or space weather effects have cyclical differences but these track annually, not quarterly.
No. Big mistake here to exclude space weather effects. We have:
Seasonal effects of the magnetic field including those in summer, spring and fall
The chart shows a clear correlation between the Kp index of the Earth’s magnetic field and covid cases. Covid cases in red with Earth’s magnetic field in blue (trendline in black)
This chart from Nasirpour et al. shows a clear association between pandemics and solar activity.
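A minimal sketch of how such an association could be checked numerically, assuming daily series are available; the file names and column names below are hypothetical placeholders.

```python
# Minimal sketch (hypothetical file and column names): correlate a daily
# planetary Kp-index series with a daily case-count series.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

kp = pd.read_csv("kp_index_daily.csv", parse_dates=["date"])        # assumed columns: date, kp
cases = pd.read_csv("covid_cases_daily.csv", parse_dates=["date"])  # assumed columns: date, cases

merged = kp.merge(cases, on="date").dropna()
r, p = pearsonr(merged["kp"], merged["cases"])
rho, p_s = spearmanr(merged["kp"], merged["cases"])
print(f"Pearson r = {r:.2f} (p = {p:.3g}); Spearman rho = {rho:.2f} (p = {p_s:.3g})")
```

Since both series are strongly seasonal, a natural next step would be to repeat the calculation on deseasonalised or detrended versions of each series.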
The immune clock
Our immune systems change with the seasons. Gene expression studies show clear and consistent shifts in immune pathways across the year – with January and July as polar opposites and transitional patterns in April and October. These changes are not subtle – nearly a quarter of genes are affected. The pattern is the opposite in Europe to Australia. In some cases, the winter and summer immune profiles are as different as those seen in entirely distinct disease states.
If there are no viruses then there is no immune system as there is nothing to be immune to!
So what are they measuring?
What is being measured are some parameters of the regulatory system, and these are seen to vary across the seasons. The system gets its cues from external sources such as light, temperature and electromagnetic events. Biomarkers have been seen to vary according to the phase of the moon, but nobody believes that this is because of gravity or even moonlight, and so the only remaining candidate is an electromagnetic signal. See: Magnetic flux ropes; The Shnoll Effect; Birth date, lifespan and disease; Frank Brown; Giorgio Piccardi
Now, since gene expression is involved, anything that depends on gene expression is also implicated, and that means almost everything.
The ‘immune system’ is regulatory in nature and phenomena such as the coordination of symptoms and sustained high temperature in influenza are surely managed by such a system as opposed to a small viral particle.
Disease is therefore a problem of regulation and regulation is receptive to cosmic rhythms: Frank Brown
Susceptibility and dose
Whatever the factors are that lead to susceptibility they must overcome the mucus barrier of the respiratory tract which is normally impenetrable to viruses.
No, because no virus is involved. Disease is the direct result of atmospheric disturbances on the regulatory system. No material substance is involved.
There are three main explanations for the seasonal cycling seen in human immune gene expression:
The viral mutation model. Immune cycling is downstream of viral exposure. Seasonal waves of infection, with everyone exposed to airborne viruses, drive immune activation, while gene expression shifts reflect that exposure.
Alternatively, gene expression, ‘mutation’ and ‘immune evolution’ are all downstream effects of atmospheric disturbances which are slightly different each year. Affected individuals adapt to specific disturbances and produce different sequencing results thereby creating the illusion of mutation. The illusion of immunity is similarly created as, having adapted to a stimulus, they will likely not succumb to a similar stimulus the following year.
Immunity studies can, to some extent, be ‘repurposed’.
Innate biological rhythm. Each person’s immune system runs on a built‑in annual clock, independent of environment or exposure.
Environmental entrainment. Immune function responds gradually to sustained environmental inputs – atmospheric, electro-magnetic or otherwise – which vary by season and location.
This one is tempting, I will admit, but I think that it is again incorrect; there are no built-in clocks in the human body and even the idea of ‘entrainment’ is highly doubtful.
Influenza outbreaks conform to a characteristic pattern with narrow peaks which occur close to the winter solstice but which vary in their timing from year to year. Such variation is not characteristic of entrainment, where we would expect very precise timings with any variation taking several annual cycles to develop.
There is disease following tornadoes and earthquakes, both associated with electromagnetic output. There are associations between sferics and headaches and there are almost instantaneous outbreaks associated with humidity changes or the rollout of cell-phone technologies. These rapid responses to randomly timed exposure show that population entrainment is certainly not necessary for the production of disease.
What seems to happen is that our bodies know approximately what sort of rhythm is required and will pick one from the available sources as a timekeeper. Speeding up the metabolism of laboratory animals will not speed up the timekeeping, and so the timekeeping does not arise from any metabolic process.
Our bodies seem to be able to select and receive any desired rhythm from the environment but unable to produce it internally. See: Frank Brown
The fact that people in Australia have the opposite cycle of immune gene expression indicates that the rhythm is not innate.
Told you so!
Experiments on students in deep underground caves show that they can sustain circadian rhythms with no exposure to light and this has led scientists to think that the rhythm is innate. However, the experiences of astronauts and submarine crew suggest that some influence is still reaching them somehow.
The electromagnetic scalar waves as described by Tesla are said to be capable of penetrating water or rock and so these must be considered as a prime candidate for transmission of environmental or maybe ‘cosmic’ information.
Note that sferics are said to be measurable thousands of kilometres away from the source with little to no attenuation. This is a defining characteristic of a Tesla wave.
This lack of attenuation can only be achieved if the energy of the pulse is strictly contained in a finite footprint. This will clearly result in exposure to only a certain percentage of the population and will confine such exposure to limited geographical regions at any one time.
However, it could well be something we simply have not measured e.g. a seasonal atmospheric phenomenon that is invisible to our current tools.
Yes. This is the way to go.
Contemporary physics is unable to describe biological systems despite the best efforts of both physicists and biologists. Moreover, there are so many anomalies and unexplained phenomena in the world that we must conclude that physics cannot even explain physics!
This incompleteness is important. We must expect to find things that we cannot explain in conventional terms and must not panic too much. The temptation to ignore anomalies must be resisted as must the temptation to use terms such as ‘pseudo-science’ for almost anything outside of orthodoxy. Arguments such as “This cannot be true because there is no physical explanation” are now invalid.
This raises the salience of experimental observations and emphasises the importance of epidemiology. The study of pure correlation is science without a mechanism; it enables us to uncover some aspect of truth even when we have little to no idea of how things work.
Revealing the relationship between solar activity and COVID-19 and forecasting of possible future viruses using multi-step autoregression (MSAR) – Nasirpour et al. https://pubmed.ncbi.nlm.nih.gov/33725302/
The differences between a system of measurement, an abstract mathematical theory of physics and actual reality are described along with some of the consequences. Fundamental constants and laws of nature are, in reality, artefacts of the mathematical theory, not fundamental, not part of reality and not even constant. A Fundamental Law of Nature is best described as a ‘non-negotiable hypothesis within a specific theoretical system‘. The gravitational constant is not a fundamental constant of nature and is not constant.
AI summary
I asked an AI engine “What is the relationship between a ‘system of measurements’ and a mathematical theory of physics?”
A mathematical theory of physics is an abstract framework that uses the rules of mathematics to describe and predict natural phenomena, while a system of measurement provides the concrete, standardized units (like meters or kilograms) to link these abstract mathematical quantities to empirical, real-world observations and experiments.
AI: The relationship can be broken down into several key points:
Mathematics as the Language of Physics: Mathematical theories express physical laws and principles in a precise, universal, and internally consistent language, independent of any specific unit system. For example, Newton’s second law is universally F=ma, regardless of whether you use SI or Imperial units.
Measurement for Validation and Quantification: Measurement is how a physical theory is tested against reality. Experimental results, expressed as numerical values combined with specific units, are compared with the predictions of the mathematical model to determine its validity and accuracy.
Systems of Units Provide Standards: A system of measurement (like the International System of Units, or SI) provides a set of defined, internationally accepted base units (e.g., the meter for length, the second for time) that ensure consistency and reproducibility of experiments across different locations and times.
Arbitrary vs. Universal: While the underlying mathematical laws are universal, the choice of a specific unit system (SI, Imperial, etc.) is conventional and, to some extent, arbitrary. The mathematical structure of the theory remains the same; only the numerical values and conversion factors change when switching between systems (e.g., an inch is defined in terms of a meter).
In essence, the mathematical theory provides the structure, and the system of measurements provides the “calibration” that allows the theory to make testable, numerical predictions about real-world phenomena. Without a system of measurements, a physics theory remains a set of abstract mathematical rules; without the mathematical theory, a collection of measurements lacks predictive power and underlying explanatory principles
This is a good summary of the relationship but with a few issues and internal inconsistencies that need discussing.
Arbitrary vs. universal
From the AI summary we have:
While the underlying mathematical laws are universal, the choice of a specific unit system (SI, Imperial, etc.) is conventional and, to some extent, arbitrary.
For example, Newton’s second law is universally F=ma, regardless of whether you use SI or Imperial units.
What is meant by “While the underlying mathematical laws are universal“? Newton’s F=ma has a strong contender in the form of the Laws of Relativity and so cannot be considered to be either fundamental or universal, but instead just one of several competing abstract theories.
The abstract ‘law’ may be ‘universally F=ma‘ but what is meant by a ‘law’?
We have from the above “Mathematical theories express physical laws and principles in a precise, universal, and internally consistent language, ..”. OK, so now a mathematical theory is not the same as a law but is instead only expressing a law. This is a contradiction of: “While the underlying mathematical laws are universal“. So again we ask: “What is a ‘law’?”, or maybe “What is a ‘mathematical law’?”.
The language is confused and ambiguous and stems from the desire to claim that reality is somehow mathematical in nature and that mathematics is not only synonymous with reality, but that the current fashionable abstract theory is a reliable description of the Fundamental Nature of Reality.
Physicists seem keen to distinguish themselves from engineers by constantly claiming ‘fundamental’ quantities, forces and relationships. They are not satisfied with making half-decent predictions concerning laboratory experiments but want to convince themselves that they have somehow described the true nature of reality in terms of a few chosen mathematical equations.
Organising the system
The relationships between Reality, the System of Measurement and the Mathematical theoretical framework are shown in the diagram below, which hopefully brings some clarity to the situation.
Note the absence of anything definite (let alone fundamental) in the Reality box
Reality is accessed via measurements only (not mathematics)
The mathematical framework is two steps removed from Reality
Mathematical quantities are not measurements
Mathematical quantities are not Reality
Mathematical equations describe relationships between the measurements
Almost all the ‘work’ is within the mathematical domain
There is a big mismatch between the fundamentals of the measuring system and the fundamentals of the mathematical system
Measurements are meaningless by themselves and need to be interpreted via the mathematical framework
We need to talk..
The limitations of measurement
What is it we can actually measure directly? The answer is “Not much and most of them are listed above”.
Note that not even ‘time’ is directly measurable, as the ‘time’ measured by two atomic clocks will vary during an eclipse (Sheldrake’s TED talk), as will that measured by a pendulum. The quantity of ‘time’ is therefore an abstract theoretical quantity that is somehow inferred from the behaviour of such machines.
‘Speed’, similarly, is not directly measurable. We need to measure the position of an object at two different ‘times’ and this enables us to calculate the average speed over the interval. Speed is therefore not a fundamental but a derived concept. Similarly, acceleration cannot be measured directly as it involves a change in speed.
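To make the point concrete, a worked form of the two derived quantities just described, built only from position readings and clock readings, is:

\[ \bar v = \frac{x_2 - x_1}{t_2 - t_1}, \qquad \bar a \approx \frac{x_3 - 2x_2 + x_1}{(\Delta t)^2} \quad (\text{equally spaced readings, } \Delta t = t_2 - t_1 = t_3 - t_2) \]

A single ‘acceleration’ already requires three position readings and three clock readings; neither quantity is read off an instrument directly.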
Time and again we read, for example: “The gravitational constant has been measured accurately to many decimal places..“, but again, a fundamental constant is part of the mathematical domain and is therefore never measured, only inferred. Fundamental constants such as this are now easily seen to be not fundamental quantities of Nature, nor even of the measurement system, but derived quantities of the mathematical theoretical framework only.
Mismatch between ‘fundamentals’
We can only measure very basic quantities such as ‘length’ and ‘position’ and even have to do some theoretical calculations to measure ‘time’. The ‘measurables’ of the system are very limited but we must constrain ourselves to using these if we are to obtain an objective characterisation of Reality.
However, the ‘fundamentals’ of the mathematical model are in abundance. We have mass, time, forces and all manner of fundamental constants to use as a basis for a complex theoretical framework.
Note that these are precisely the quantities that physicists claim as the fundamentals of actual reality, but also note that they derive immediately from the fundamentals of the measuring system, not actual reality. Note also that none of these quantities themselves are directly measurable! There is therefore a major mismatch between what we measure and what we theorise as fundamental to the theoretical model.
We need, therefore, some sort of interpretive system interposed between the measurement system and the theoretical system to make sense of the measurements. When does a set of measurements of length etc. translate to a ‘mass’ and when does it translate to a measurement of ‘time’ etc.?
Such interpretation is necessarily part of the theoretical model and is dependent upon it. A different theoretical model requires a different interpretive procedure to translate measurements to fundamentals. The ‘measurable’ quantities are not going to change and so the so called fundamentals of reality are really just artefacts of the abstract mathematical theoretical framework. A new framework will necessarily result in a new set of fundamentals.
What is a Law of Nature?
Newton’s 2nd Law of Motion is invariably described as a Fundamental Law of Nature:
Force = mass x acceleration
We can now easily see what this means. None of these quantities are part of our measurement system, being not directly measurable, and nor are they part of Reality as far as we know, but instead they are all part of the abstract theoretical framework.
The quantities of the measurement system are just point measurements and have no real meaning by themselves but when somehow translated into the theoretical framework we expect to see meaningful relationships between the quantities which will:
Allow the formulation of quantitative ‘predictions’ concerning further measurements to validate the model
Enable some sort of ‘understanding’, i.e. some representation of reality which sits comfortably within the human cognitive system
Note that only a finite number of measurements have ever been made of reality and even then they were made with limited techniques and primitive equipment. Note also that the theoretical framework ultimately derives from such limited measurements and is therefore necessarily constrained by such limitations.
Note also that if the theoretical framework itself changes then so do the relationships; relativistic mass is different from Newtonian mass for example.
Definition: A Fundamental Law of Nature can now be described as a ‘non-negotiable hypothesis within a specific theoretical system‘ and that is all.
The relationship between a Law of Nature and Actual Nature is always via a limited system of measurements which never directly measures any of the Fundamental Properties of the theory itself.
Different ways of measuring?
We hear often that there are now better (different) ways of measuring various quantities such as mass, gravitational constant, permittivity of space, speed of light etc. but this is deceptive language aimed at making it seem that things are improving all the time.
The idea that there are two different ways of measuring mass, for example, is just nonsense. Mass is not a measurable quantity and isn’t even part of the system of measurement. Mass is a theoretical interpreted quantity and therefore not susceptible to measurement.
The way that scientists use language gives the impression that mass is part of Reality, that there is something there to be measured and that successive ‘refinements’ to measurement techniques are all that is needed to get an improved value for its magnitude.
The idea that there is something absolute to be measured if only we knew how, lends respectability to what might be termed ‘goal oriented’ refinements to experimental techniques. If you really think that there is something there to be measured then both the attainment of greater agreement between experiments and the increasing ‘accuracy’ of such results leads inevitably to the conclusion that you are getting nearer the ‘real’ answer.
In particular, if successive refinements seem to be converging towards a specific value, then this value will be assumed to be the true goal regardless of the fact that it is never actually attained.
So what?
The calculated values of the gravitational constant vary considerably over time and according to a repeated pattern. It is clearly not constant, so there is something wrong with the theory and the theory should be ditched.
However, the idea that Big G is a fundamental of Nature itself, as opposed to a derived value in an abstract theoretical system, means that physicists are loath to admit that it is the theory that is wrong and instead blame the measurement system:
Now scientists have found that the measured G values oscillate over time like a sine wave with a period of 5.9 years. It’s not G itself that is varying by this much, they propose, but more likely something else is affecting the measurements. – Zyga
This is odd since the measurements themselves are dependent upon technology and it is the technology that has been improving steadily over the years. We are being asked to believe that the technology is not only wrong but is wrong according to a regular pattern.
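For clarity, the oscillation described in the quote amounts to modelling the reported determinations of G as a constant plus a sinusoidal term with a period of roughly 5.9 years:

\[ G(t) \approx G_0 + A\sin\!\left(\frac{2\pi t}{T} + \phi\right), \qquad T \approx 5.9\ \text{years} \]

The dispute is then over whether the oscillating term belongs to G itself or to some systematic effect in the measurement apparatus.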
The idea that the equations of Newton are not theory but immutable features of physical reality is impeding scientific progress.
π
Pi is of course not a fundamental constant of nature but a fundamental constant of geometry; the two are not the same.
Special and general relativity
The theories of relativity are clear examples of confusion between theory and reality. A simple abstract mathematical idea, that of a coordinate system, soon becomes conflated with actual reality and ends up being regarded as synonymous with physical space-time, even in the absence of sufficient support from the measurement system itself: Einstein’s relativity vs. actual reality
Summary
The tendency to conflate an abstract mathematical theory with an actual physical process seems to be irresistible, but nevertheless leads to severe interpretive problems. The fashion of the day is framed as a fundamental law or equally fundamental constant and scientific progress is immediately impeded by the requirement to express any new idea within the self-imposed constraints. Nothing that is declared ‘fundamental’ can be challenged without cries of ‘pseudo-science’!
Again, the ‘fundamentals’ of nature are nothing of the sort but are instead the bases of an abstract mathematical system which tries to give meaning to a finite number of measurements. These measurements in turn derive from severely limited measurement techniques, with the hope that they somehow represent the entirety of Reality.
An attempt is made to understand Einstein’s theories of relativity, particularly with respect to the central idea of an inertial frame of reference. Available descriptions are confusing and contradictory with definitions of the basic concepts either ambiguous or absent. Einstein himself voiced similar concerns. Some of Einstein’s fundamental errors are pointed out and alternative ideas proposed. The experimental results that are claimed to be explained by the theory of relativity are insufficient to prove the theory of relativity and in many cases, alternative explanations are available.
Inertial frames of reference
The idea of an inertial reference frame is key to Einstein’s theories of relativity, both ‘special’ and ‘general’. It follows that:
If we can’t understand inertial frames of reference then we can’t understand relativity
If a text doesn’t explain inertial frames properly then it hasn’t explained relativity
If reference frames have no consistent, unambiguous definition then relativity is likewise undefined
We take Wikipedia as a respected source of information on this and try to understand the main ideas.
What is a ‘frame of reference’?
In physics and astronomy, a frame of reference (or reference frame) is an abstract coordinate system, whose origin, orientation, and scale have been specified in physical space. It is based on a set of reference points, defined as geometric points whose position is identified both mathematically (with numerical coordinate values) and physically (signalled by conventional markers). – Wikipedia
So a frame of reference is just a coordinate system and as such we can use it to define such a thing as ‘position’. If we now integrate the concept of ‘time’ somehow, we can define the change of position over time and call it ‘movement’ or ‘velocity’.
‘Velocity’ is the rate of change in position with respect to a specified coordinate system and agreed time metric.
Likewise we can define the concept of ‘acceleration’ as the rate of change in velocity with respect to a specific coordinate system and agreed time metric.
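As an illustration of how both quantities depend on the chosen frame, consider a second frame whose origin moves along a path X(t) relative to the first. Position, velocity and acceleration in the new frame are:

\[ \mathbf{x}'(t) = \mathbf{x}(t) - \mathbf{X}(t), \qquad \mathbf{v}'(t) = \mathbf{v}(t) - \dot{\mathbf{X}}(t), \qquad \mathbf{a}'(t) = \mathbf{a}(t) - \ddot{\mathbf{X}}(t) \]

An object with zero acceleration in one frame therefore has non-zero acceleration in any frame whose origin is itself accelerating; neither quantity has a meaning until the frame is specified.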
Coordinate systems (frames of reference) are described as frameworks for the specification of position, velocity and acceleration and that is all.
Conversely, if we are to describe such things as position, velocity and acceleration, then the framework with respect to which they are defined is deserving of the term ‘frame of reference’.
To reiterate: All position and movement is defined with respect to frame of reference (coordinate system).
A first inconsistency?
In the same paragraph, Wikipedia goes on to say:
An important special case is that of an inertial reference frame, a stationary or uniformly moving frame. – Wikipedia
Ouch!
What is a ‘stationary or uniformly moving frame’? Such uniform movement (or otherwise) is only defined with respect to some coordinate system (reference frame), but which one?
We are talking here about the movement of a reference frame itself, not objects within it. Such a movement is nevertheless ‘movement’ and hence must be measured in some coordinate system in order to have any meaning at all. The moving framework cannot be described with reference to itself (it would always be stationary!) and so some other ‘higher’ or ‘universal'(?) framework is assumed here but not explicitly stated.
I would suggest that the reason such a framework is not discussed is because the eventual aim is to give justification to the idea, from Einstein, that no coordinate system is preferred over any other; everything is ‘relative’.
What is an inertial frame of reference?
The abstract idea of a frame of reference was introduced above, but Wikipedia has a whole separate entry now on the definition of a specifically ‘inertial’ frame of reference:
An inertial reference frame is a frame of reference in which Newton’s first law of motion holds true without any corrections. This means that an object either remains at rest or continues to move with constant velocity in a straight line unless an external force acts on it. In such a frame, there are no fictitious or pseudo forces required to explain the motion of objects. – Wikipedia
Compare with the first definition above, where an inertial reference frame is described as a “stationary or uniformly moving frame”.
The first definition is in terms of coordinates, of position, distance, velocity and acceleration (change of velocity over time) but the second is in terms of Newton’s laws of physical motion.
These two concepts are worlds apart and should never, ever, be assigned to the same terminology. There is no concept of ‘force’ within a coordinate system, nor of an ‘object’, ‘inertia’ or even ‘mass’; these are separate entities that need their own definitions.
Note that the first definition of an inertial frame contains no mention of the word ‘inertia’ – and so why refer to it as ‘inertial’? This tends to conflate the idea of inertia with that of acceleration. They are obviously different entities but later descriptions of relativity require that they be effectively the same thing, and so describing a stationary frame as ‘inertial’ makes it a practical certainty that such a conclusion should eventually be reached.
Again, from the same article in Wikipedia:
Inertial reference frames are either at rest or move with constant velocity relative to one another. – Wikipedia
What does this mean? Two possibilities:
This is a definition. Inertial frames are now defined as those that are at rest, or moving with constant velocity, relative to one another
This is a theoretical consequence of the definition in terms of Newton’s first law.
In all likelihood, the second possibility is intended, but it needs some justification. The attempt here is to define the basis of special or general relativity and so accuracy is required.
What does it mean to: “move with a constant velocity relative to one another“? Remember that velocity is always defined with respect to the elements of a coordinate system and so the relevant coordinate system here should be specified. We can guess here that each coordinate system is to be regarded as an element of the other but this has the consequence that each system ultimately contains a reference to itself!
If the only qualifications of an inertial system are those to do with relative velocity, then why are they described as ‘inertial’?
This is a perfect example of definition creep which seems ubiquitous in attempts to describe relativity. Descriptions start off talking about velocity and acceleration, i.e. events within a pure coordinate system, but soon turn to forces and inertia and after a while the reader becomes hypnotised into believing the central tenets of the theory with no real justification at all.
Special relativity
From the Wikipedia entry on special relativity:
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein’s 1905 paper, “On the Electrodynamics of Moving Bodies”, the theory is presented as being based on just two postulates:
The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity.
The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
Read again: “The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration).“
So we are now describing inertial frames as those with no acceleration again. Fine, but acceleration with respect to what exactly? If this question cannot be answered then there is no acceptable definition of special relativity.
Note that this definition of inertial frames is both convenient and necessary here, as if we accept the alternative definition of a frame of reference where Newton’s law holds true then we have something like: “The laws of physics are invariant (identical) in all frames of reference where Newton’s first law holds“. This is not entirely vacuous but note that it cuts out the idea of acceleration altogether and if all we are concerned about is Newton’s law then we get: “Newton’s first law holds in all frames of reference where Newton’s first law holds“. This is vacuous now and nothing of any meaning has been said about Newton’s law, gravity or acceleration.
From the same Wikipedia article:
In relativity theory, ‘proper acceleration’ is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured.
And there you have it! The transformation is complete! We have moved seamlessly from a definition of acceleration that everybody understands to one that is convenient for the theory of relativity.
Accelerometers do not measure acceleration in the conventional sense of the word but instead record the displacement of a weight owing to either inertial or gravitational forces.
We started with ‘acceleration’ meaning a change of velocity within a specific coordinate system and ended up with a definition in terms of forces, inertia and gravitational attraction. We have now seemingly described inertial reference frames without the need of velocity or position, or in other words, without any of the qualities that identify a reference frame as a coordinate system.
A non-accelerating frame has become synonymous with a force-free frame simply by linguistic trickery.
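In Newtonian terms, the point about accelerometers can be stated directly: an ideal accelerometer reads the specific force on its test mass, not the coordinate acceleration:

\[ \mathbf{f}_{\text{read}} = \mathbf{a}_{\text{coord}} - \mathbf{g} \]

In free fall, \( \mathbf{a}_{\text{coord}} = \mathbf{g} \) and the instrument reads zero; at rest on the ground, \( \mathbf{a}_{\text{coord}} = 0 \) and it reads a steady 1 g upward. The reading therefore only corresponds to ‘change of velocity in a coordinate system’ when gravity is absent or separately accounted for.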
Einstein’s concerns
This conflation of ‘inertial’, ‘non-accelerating’, ‘force free’ and ‘Newtonian’ has not gone unnoticed:
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it, is perceived to move with a constant velocity, or, equivalently, Newton’s first law of motion holds. – Wikipedia
What is meant by ‘zero acceleration’ in the above?
If you do not have an absolute frame of reference then how can you ever say that something is moving with constant velocity (zero acceleration)? You clearly can’t and so they are trying to define constant velocity as relative to other frames that are also moving with constant velocity relative to each other. This is gibberish.
Einstein himself was aware of the problem:
The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.
— Albert Einstein: The Meaning of Relativity, p. 58
Zero acceleration is now defined, not with reference to a coordinate system but by the lack of gravitational attraction from other bodies.
Example: Two falling weights
Inertial reference frames are either at rest or move with constant velocity relative to one another. – Wikipedia
As an example, consider two astronauts positioned a thousand miles above the Earth, a hundred miles apart and falling freely towards the planet’s surface.
A stationary observer at the surface will see these astronauts accelerating with respect to himself and also with respect to each other as they converge. Furthermore, the astronauts see themselves as accelerating towards each other and towards the Earth.
By the discussion above, we cannot have all of these as being stationary within inertial frames at the same time – so which ones are inertial and which ones are not? How do we tell?
Which of these bodies is moving ‘without acceleration‘? Physicists will no doubt say: “The freely falling bodies are in an inertial frame because they experience no force and Newton’s first law holds”, but the question was about acceleration and replying in terms of forces like this pretty much assumes the conclusion that Einstein was trying to reach.
Why is all this happening?
Newton’s theory of gravitation is based upon the ideas of mass, gravity, force, inertia and acceleration. However, whilst it is clear that there is some relationship between these quantities, it isn’t quite clear precisely what this is and nor is there any basic mechanism described for the phenomenon of either inertia or gravitational attraction.
Einstein has seen this and conceived the idea that inertia and gravity are one and the same thing but viewed according to different coordinate systems. The acceleration caused by gravity is now nothing more than the acceleration of a body perceived according to an accelerating frame of reference, nothing more and nothing less.
Einstein has thereby obviated the need to describe a mechanism for gravity by simply relabelling it as ‘acceleration’. He has declined to provide a physical mechanism for gravity and instead reframed it as, very simply, a change in position relative to something else! An observation (measurement) has been elevated to the status of a physical law.
The equivalence principle
A version of the equivalence principle consistent with special relativity was introduced by Albert Einstein in 1907, when he observed that identical physical laws are observed in two systems, one subject to a constant gravitational field causing acceleration and the other subject to constant acceleration, like a rocket far from any gravitational field. Since the physical laws are the same, Einstein assumed the gravitational field and the acceleration were “physically equivalent”. – Wikipedia
Einstein stated this hypothesis by saying he would:
“…assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system.”
— Einstein, 1907
This is obviously two big mistakes rolled into one short phrase.
First, Albert refers to an “acceleration of the reference system“, but again we can ask: “With respect to what?”
Second, the phrase “complete physical equivalence” is surely a massive overreach? The text above claims that Einstein: “observed that identical physical laws are observed in two systems.. like a rocket far from any gravitational field.”. Really? How did he observe this? A complete characterisation of the laws of physics is not available at the present and was not available in 1907. There is therefore no way of testing for complete physical equivalence. This is a meaningless phrase.
The available laws at the time were Newton’s laws of gravity and since these were proving to be inadequate, alternatives should have been considered. Instead what has happened is that Einstein has tried to ‘fix’ the paradoxes of Newton by the simple means of equating all acceleration with gravitational acceleration. By this means he can do without any explanation for a physical mechanism of gravity and just say that it is ‘acceleration of the reference system’.
We can say that no additional physics is being proposed here, merely the same Newtonian laws but described from different perspectives. Indeed, the proposed equivalence of acceleration and gravity actually stifles further enquiry into the topic as there is nothing further to research, with any further anomaly resulting in attempted explanations by manipulation of the reference system only.
Out of necessity now, Einstein will go on to explain the laws of physics purely in terms of outlandish frames of reference, resulting in the concept of 4-dimensional curved space-time with shrinking lengths and clocks that run at different rates.
A model of the fundamental nature of space and time has arisen purely from considerations of gravity and acceleration, and much of that mere conjecture. It is no surprise then that the new theory says nothing about the forces of electromagnetism and is unlikely to do so for the foreseeable future.
The gravitational field
The conflation of a gravitational field with mere acceleration effectively rules out the investigation of any characteristic of a gravitational field that is not relevant to acceleration; the theoretical framework is simply not able to express such properties.
Gravitation is now synonymous with acceleration and has no other function than to move objects, and no other measurable or theoretical properties other than those pertaining to acceleration.
This is clear bunk. We have, in a gravitational field, several properties which are likely to have effects other than pure acceleration:
A diminishing of strength according to an inverse square law
A divergence of the ‘field lines’
A reduction of curvature of the isobars according to an inverse square law
Some fine grained structure arising from the atomic structure of the Earth
A directional accelerative propensity towards the Earth
An aligning effect on a ship’s gyroscopic compass
A mechanism for inertia
Some other global structure aside from a simple ‘sink’ (e.g. a vortex structure)
Something to explain the precession of the perihelion of Mercury
Some of these are already measurable and others may be measurable in the future or calculable from other measurables. To say that they don’t exist or aren’t relevant is positively deranged and for a theoretical framework which rules these out to survive for a whole century is just inexplicable.
Gravitational attraction is not just acceleration, there is a mechanism producing such an acceleration which needs explaining. Indeed, acceleration itself is not a mechanism but the resultant effect of such a mechanism, whatever that may be.
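For reference, the first two items in the list above correspond roughly to the standard Newtonian field expressions, which already say more about the field than the bare statement that things accelerate:

\[ g(r) = \frac{GM}{r^2}, \qquad \nabla\cdot\mathbf{g} = -4\pi G\rho \]

That is, the strength falls off with the square of the distance and the field lines converge on mass; these are properties of the field itself, independent of whether any test body happens to be accelerating in it.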
Example: elevator gravity
An example below from the Wikipedia entry on General relativity:
According to general relativity, objects in a gravitational field behave similarly to objects within an accelerating enclosure. For example, an observer will see a ball fall the same way in a rocket (left) as it does on Earth (right), provided that the acceleration of the rocket is equal to 9.8 m/s2 (the acceleration due to gravity on the surface of the Earth). – Wikipedia
So now objects in a gravitational field only behave similarly to objects within an accelerating enclosure, whereas before, the laws of physics were identical.
What is an accelerating enclosure accelerating relative to? If the rocket is at the surface of the Earth then it does not need to accelerate as the effects are already there from the gravitational field.
We are intended to imagine the rocket in space far away from any gravitational field. However, there is no such place in the universe and so no such experiment has been performed and never will be performed.
We have, from the same article:
..it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.
This is pure conjecture. This is a thought experiment, the result has been assumed and a theory has been developed with no empirical data or foundational definitions.
There is no such thing as ‘free space’, the whole of space is permeated by a gravitational field. What is the meaning of: “stationary in a gravitational field“? Again, another use of the word ‘stationary’ without reference to a well-defined coordinate system.
What is the solution?
We can go on like this almost indefinitely but the fundamental problem always remains, which is the lack of a well-defined coordinate system in which these events take place. Without this we have no way of defining acceleration or even velocity and, since the whole point of the Theory of Relativity is to describe gravitational effects in terms of such metrics, it can be regarded as a failure.
It is all very well to criticise something, but such comments will simply fall upon deaf ears unless some sort of alternative is at least suggested.
The ‘Inertial Field Theory’ (IFT)
The post: Gravity as an inertial field outlines an idea that gravity is in fact an ‘accelerating’ inertial field with mechanisms and characteristics of itself that explain the local movement of matter in the cosmos.
Consider that:
A gravitational field has fine grained structure on the scale of the atom
A horizontal component is present
The local structure provides for inertial effects
An accelerative component provides for gravitational attraction via ‘movement’ of the inertial mechanism
The accelerative component derives from the global structure whether it be purely radial or vortex-like in nature
The idea of a ‘uniform’ gravitational field is probably bunk
We can now describe a plausible and at least consistent foundation for a theory of gravitation and provide answers for Einstein’s thought experiments.
What is an ‘inertial frame’?
If a gravitational field has both horizontal and vertical components that are roughly isotropic then we may use this as the basis of an actual physical coordinate system. The system is uniform only locally and theoretically varies from point to point across the whole universe.
This aspect of the gravitational field is insensitive to ‘uniform’ motion of matter but has a certain accelerative resistance thereby providing for both inertia and gravitational acceleration.
The gravitational field has a fine grained structure of a certain scale and this may be used as a basis for a metric of length and hence velocity and thereby acceleration. We therefore have a coordinate system that is:
Highly local – not global
‘Absolute’ in a sense as opposed to arbitrary or relative
Defined by characteristic physical processes, whatever they may be
Responsible for both defining and implementing the laws of gravity and inertia
Free-falling objects move according to local field conditions only and can be said to be following an ‘inertially straight’ path. This is not a geodesic in space-time as there is no need to suppose a distinct space-time as separate from the local field. This is not necessarily the shortest distance between any two points but is a path determined by local field interaction at every point on the path.
Free falling objects in close proximity form an equivalence class of objects which may be said to be in ‘uniform motion’ relative to each other. Their velocities are all constant relative to the local inertial field and constant relative to each other by definition.
There is no need for an abstract coordinate system anywhere as the idea, maybe surprisingly, doesn’t make any sense. Physical objects are moved around by physical field phenomena and that is all. Any idea of a metric must come from emergent properties of the field characteristics itself. In stark contrast to Einstein’s approach where coordinates and ‘space’ are ‘fundamental’, we have a system where the physical gravitational field is the fundamental and any coordinate or metric is defined in terms of local field characteristics or their effect on ‘matter’.
The field forms an inward spiralling vortex system around the Earth where the rotation at the Earth’s surface is synchronous with the Earth’s rotation, thereby forming a ‘gravitational-inertial layer’ at the surface of the Earth which provides for laboratory conditions. Almost all experiments performed by physicists have been within this layer, thereby giving the impression that such conditions are representative of the cosmos as a whole and that all discoveries have been ‘fundamental’ and universal. The Michelson-Morley experiment was performed within this layer.
The horizontal components of the field give rise to inertia and centrifugal forces. Objects at the Earth’s surface can be said to be accelerating upwards relative to the Earth’s gravitational field, where such acceleration is relative to the downward accelerative component of the (physical) gravitational field.
Any experiment carried out in a free-falling rocket is nevertheless within a gravitational field somewhere and this field provides a physical reference frame for measurements, movement, acceleration and the behaviour of rotating bodies.
What would Einstein say?
I think Albert would approve, he was obviously trying to:
Remove the need for a global coordinate system
Define physical laws locally
Somehow unify gravity, inertia and acceleration
Explain the Michelson-Morley result
Explain rotational motion and centrifugal forces
Come to terms with his own discomfort with the foundational ideas
Unification of inertial and gravitational fields
The gravitational and inertial fields are different components of the same field:
Even in generally-covariant reformulations of these older theories, there will be an inertial field and a gravitational field existing side by side. The unification of these two fields into one inertio-gravitational field that splits differently into inertial and gravitational components in different coordinate systems (not necessarily associated with observers in different states of motion) is one of Einstein’s central achievements with general relativity – Michel Janssen
The motivation is good but the unnecessary introduction of different coordinate systems spoils the idea. The accelerative effect of the gravitational field is always present even if it is not measured. If an observer is freely falling towards Earth, they will not experience any accelerative effect from the gravitational field as they are moving along with the field acceleration. However, there must be some sort of mechanism producing this effect and that physical mechanism is not going to disappear just because the observer is moving along with it.
One idea might be that it is the radial convergence of the gravitational field lines towards the planet which produce such acceleration, in which case an observer can accelerate all they like towards the Earth but the field lines have their own ontology within the theoretical framework and are not going to vanish just because they are being ignored.
Another idea is that it is the ‘curvature’ of the field which produces such acceleration. This curvature diminishes with the inverse square of the distance from the Earth and so can be thought of as producing less acceleration the further out in orbit we are.
Some texts talk about a ‘uniform gravitational field’ in an attempt to simplify the ideas of special relativity, but if either of the above two hypotheses is true then there is no such thing as a ‘uniform gravitational field’, since the acceleration comes from phenomena that derive directly from the radial or curved nature of the field. It would be like claiming that the centripetal effect of a tornado has nothing to do with the rotational nature of the wind, or trying to simplify the picture to a ‘flat’ tornado!
The removal of a global frame of reference
After the development of General Relativity, Einstein wrote:
Why were another seven years required for the construction of the general theory of relativity? The main reason lies in the fact that it is not so easy to free oneself from the idea that co-ordinates must have an immediate metrical meaning
(Einstein, 1949, p. 67).
Einstein failed to do this:
As we will see .., the coordinates that Einstein actually used in his accounts of the twins and the bucket in the 1910s have essentially the same status as those in special relativity. They still have direct metrical significance and still identify and individuate space-time points uniquely. – Michel Janssen
The scheme that Einstein settled upon was to identify ‘space-time’ as representing a global and somewhat ‘absolute’ reference frame but at the same time to allow such a coordinate system to have a curved geometry and to allow such curvature to be produced by some physical (although unspecified) process involving something called ‘mass’.
Thinking about this in a quiet place, we realise that this is just a rephrasing of all the ambiguities and double-speak that plagued the early formulations of special relativity.
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.
— Albert Einstein: The Meaning of Relativity, p. 58
The idea is ostensibly to use a coordinate system (reference frame) as a basis for defining acceleration as before, but the coupling of ‘mass’ to the geometry of space-time performs the same linguistic trickery as before and effectively re-defines an inertial frame by its propensity to accelerate an object. This is just a rehash of Newton’s force = mass x acceleration but with ‘Force’ replaced by ‘space-time curvature‘, ‘mass’ replaced with ‘the propensity to curve space-time‘ and acceleration with ‘movement caused by space-time curvature‘.
Again, no new physics has been produced and all we are left with is a more complicated way of looking at Newtonian gravitation.
Moreover, the formulation of acceleration as being something like the natural propensity of a mass to move through space-time effectively removes the need to provide any other explanation for such a phenomenon. A physical law is replaced with a ‘natural propensity‘. This is not a new physics but a way of avoiding doing any physics at all!
Example: a geo-stationary space station
Imagine a geo-stationary space station hovering above a laboratory on Earth. The relative velocity of the laboratory and station is zero. There is no relative movement, so are they both in the same inertial frame of reference or not?
Although there is no obvious relative acceleration I think that most physicists would say that they are in different inertial frames and the reason given would be that the station is in free-fall whereas the laboratory is not.
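For reference, the condition physicists have in mind when they say the station is in free fall is the standard one: the inward gravitational acceleration supplies exactly the centripetal acceleration needed to circle the Earth once per sidereal day. With the usual textbook values this gives

\[ \frac{GM}{r^2} = \omega^2 r \quad\Rightarrow\quad r = \left(\frac{GM}{\omega^2}\right)^{1/3} \approx 42{,}164\ \text{km} \]

measured from the Earth’s centre. This is offered only to make explicit what the conventional claim amounts to.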
So although frames of reference are theoretically defined in terms of spatial acceleration, none of this really matters when it comes to actual examples and we find again that inertial frames are described in terms of what physicists imagine is happening in physical space.
How do we know that an orbiting station is in free-fall when it has no relative movement, let alone acceleration? How do we know that conditions at the surface are different? Not by any observed acceleration between the laboratories, that is for sure, but by the overall geometry of the situation and the observed difference in the behaviour of masses within each room.
Such behaviours are clearly independent of each other and decoupled from any relative acceleration that may exist between the laboratories. Experiments within each room unfold according to the local field conditions within that room and that is all. What does the idea of variable frames of reference add to all of this?
Special relativity as an engineering model
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. – Wikipedia
There is no place in the universe that is without gravity and so we can disregard special relativity as a reliable representation of actual reality. It is not a law of physics, it is not a law of nature and it is not a fundamental principle. It is at best a collection of useful rules of thumb that can be used to address specific physical problems.
As a theoretical framework it is riddled with ambiguities and deficiencies as we have seen and in particular it has failed to define either gravity or acceleration.
Even the idea that it can be used to perform useful calculations where gravity is negligible is surely a joke? How do we know if we can ignore gravity when gravity has not even been defined properly? The equivalence principle says that gravity is indistinguishable from acceleration and is therefore, along with acceleration, effectively unmeasurable and undefinable. We are therefore left asking: “What is it, exactly, that can be neglected?”
General relativity is no better and suffers the same fundamental problem which is that of defining acceleration, gravity, inertia, frames of reference and a global coordinate system.
Attempts to identify gravitational attraction with pure acceleration have failed and at the same time effectively prevent any further enquiry into the nature of the gravitational field, having given the impression that the problem has already been solved in terms of bendy space-time.
The Michelson-Morley experiment
It turns out that light is measured as having the same speed whether travelling with the Earth’s motion through space or against it. This was a surprise at the time and is said to be the motivation behind the development of special relativity.
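For orientation, the classical ether picture predicted a small but supposedly measurable effect. Using the usual textbook figures (effective arm length L ≈ 11 m, visible light with λ ≈ 500–600 nm, orbital speed v ≈ 30 km/s), the expected shift on rotating the apparatus is roughly

\[ \Delta N \approx \frac{2L}{\lambda}\frac{v^2}{c^2} \approx 0.4\ \text{fringes}, \]

whereas the observed shift was at most a few hundredths of a fringe.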
So how did Einstein solve the problem? Put simply, he just declared the result to be a fundamental principle of physics and manipulated everything else to fit the result that he wanted.
From the definition of special relativity:
2. The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
This is just garbage: a crude forcing of the result that was required, based upon one experimental result only.
There is no such thing as an inert and empty vacuum as normally conceived since all areas of space are permeated by a gravitational-inertial field. Moreover, since it is precisely these fields that are of relevance here, this should be explicitly acknowledged rather than brushed under the carpet as ‘vacuum’.
One consideration is that the gravitational field at the surface of the Earth rotates with the Earth thereby providing a stable reference frame for the movement of both mass and light. However, the formulation of gravity as synonymous with acceleration effectively excludes this hypothesis from the model and leaves us bereft of any other means of explaining the experimental result apart from declaring a new principle of nature.
A principle is declared and not just for the local conditions in the Earthly laboratory, but for the whole of space everywhere and at all times!
Experimental evidence
Aficionados are adamant that there are many experiments that confirm the truth of the theories of relativity, to great precision. However, closer examination reveals things to be a little more complicated.
The precession of Mercury
The orbit of Mercury is elliptical, but the axes of such an ellipse are not static and rotate over time. This is contrary to the assumed action of a simply radial Newtonian force and needs some explanation.
The ‘solution’ from General Relativity is to assume that gravitational effects do not propagate instantly through the space-time framework but do so at a finite speed, the ‘speed of gravity’ (Wikipedia). This allows calculations to be made that seem to explain the motion of the planet.
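For reference, the figure usually quoted from general relativity for the extra perihelion advance per orbit is

\[ \Delta\phi = \frac{6\pi G M_\odot}{c^2 a (1 - e^2)}, \]

where M⊙ is the mass of the sun, a the semi-major axis and e the eccentricity of the orbit; for Mercury this works out to roughly 43 arcseconds per century, which is the anomalous part of the observed precession.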
Note that again the term ‘space-time’ has moved from defining a mere coordinate system to becoming a complete, all-pervasive physical entity which is causal in directing events at a cosmic scale. It is responsible for moving ‘mass’ around through physical space and is in turn responsive to the presence of such mass, thereby altering its curvature… in order to move such a mass!
John Wheeler summarises:
Matter tells spacetime how to curve, and curved spacetime tells matter how to move – John Wheeler
This should be a massive red flag. The language of causation is used but the causal chain is circular! How do you preserve your own sanity with such an attitude? How does the ‘telling’ happen? What is the mechanism please? How does anything happen at all?
Returning to the precession of Mercury, we need to do some actual calculations within the framework of general relativity in order to prove our point. It turns out that the calculations for the altered orbit were actually performed within the framework of the parameterised post-Newtonian formalism (Wikipedia).
This framework is Newtonian in spirit, Newtonian in name and uses the very Newtonian concepts of:
Newtonian gravitational potential
Momentum
Angular momentum
Gravitational potential energy
Kinetic energy
The parameterised post-Newtonian formalism is therefore a de facto extension of Newtonian physics. The Wiki post tries to squirm out of this by claiming that it is a Newtonian approximation to general relativity, but if all of the computation requires Newtonian-type quantities within a Newtonian framework, then what has been gained by calling it General Relativity?
The idea that effects travel through Einstein space-time at the speed of gravity (speed of light) is a MacGuffin employed to distract and give validation to the fashionable theory of the day. We could just as well have said that Newton’s gravity propagates at the speed of light and come up with exactly the same results using exactly the same post Newtonian formalism.
Once again, the theory of relativity is just a more complicated way of doing the same Newtonian physics.
The theory of general relativity is still not well-defined and so no amount of accurate predictions can confirm such a theory as: there is no theory!
Why have things gone so wrong?
Reading back through the post it is evident that the same themes crop up time and time again and that the same basic mistakes are responsible for leading the aspiring theorist astray. Einstein himself started off with good intentions but still thought in the same basic patterns and so ended up in the same blind alleys.
The mistakes arise from a few fundamental assumptions which seemed fine at the time but have proved to be crippling in the development of a consistent cosmology:
Error 1: Physics is downstream of mathematics
Almost all physicists believe this, but it just isn’t true. The idea of a reference frame upon which to hang physical events started out fine but we ended up with a space-time that was physical, curved, dynamic and ultimately causative. This seems inevitable in hindsight as physical reality must always somehow reference such a system in order to travel in a straight line for example and so the coordinate system ends up partaking of physical reality even if only passively.
The solution is to take observed physical events as the basis for a science and any apparent order in the form of a consistent coordinate system to be regarded as emergent from these observations.
Error 2: The world is not ‘Newtonian’
The Newtonian world consists of ‘objects’ moving around in space that is empty apart from a few gravitational forces emanating from those objects themselves. A ‘separation’ is built into reality of space, distance, force and object. Forces emanate from ‘matter’, matter takes prime place in the causal chain and matter is somehow aware of a separate coordinate system. Each element of reality is subject to different laws.
In terms of a solution from field physics, the cosmos consists solely of field interactions at every point in the cosmos, with matter, mass and forces constituting observable and measurable effects which, by virtue of their salience, attain an undeserved prominence in our cosmology. To regard such emergent effects as ‘fundamental’ will clearly result in failure.
Error 3: The innate properties of objects
Mass and inertia are held to be ‘innate’ properties of matter and this distortion percolates down even into relativity. The idea should be considered that both are emergent properties arising from the interaction between matter and field structures, rather than immutable properties of matter itself. This becomes evident in John Wheeler’s statement above where mass and space-time curvature are obviously precisely the same thing, but he can’t quite bring himself to say so for some reason.
Nobody regards ‘friction’, for example, as an innate property of matter, so why regard ‘inertia’ as an innate property of matter?
Error 4: Locality bias
The idea that an experimental result in a laboratory is somehow representative of physics at all points in the universe for all time is a clear bias.
Error 5: The fixation on causality
This is another Newtonian concept, that events proceed in a ‘causal’ chain from some original cause (Big Bang) to the complexity we see at the present. In reality, the entire cosmos evolves as a whole and any perceived ‘events’ are merely emergent and observable effects of such an evolution. To describe such events as ‘fundamental’ and such causal chains as controlled by ‘fundamental’ laws is misleading and again crippling in the formulation of a consistent cosmology.
As an example, consider Wheeler’s statement that “Matter tells spacetime how to curve, and curved spacetime tells matter how to move“. It is evident from this that matter and spacetime move in concert with each other and are effectively synonymous, but the conceptualisation of the two as fundamentally different entities necessitates some sort of physical coupling and the abstract idea of ‘causality’ has been roped in as yet another MacGuffin to cement over the cracks, with no mention of an actual physical mechanism. Such a thing is not thought necessary because the abstract idea of ‘causality’ is so readily accepted.
Error 6: Inability to assimilate an existing paradigm
An alternative to ‘causality’ had already been discovered in the form of the Navier-Stokes equations governing the flow of fluids and gases. Here, there are no separate objects as such to exert forces upon each other, and no distinct ‘events’ to delineate causality. Instead, fluids and gases are treated as a continuum whose behaviour is in accordance with a set of partial differential equations. This is about as far from intuitive as we can get, but it nevertheless avoids all of the problems we are seeing. Reality evolves at each point in the continuum according to certain rules and that is all that happens. Any perceived order within the resulting activity is not a fundamental law but an emergent effect only.
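For reference, the incompressible form of these equations reads

\[ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, \]

where u is the velocity field, p the pressure, ρ the density and ν the kinematic viscosity. Every point of the continuum evolves according to these local rules and nothing else; any larger-scale order is emergent.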
Error 7: The Laws of physics are not ‘reality’ and are not fundamental
The laws of physics are twice removed from reality: they take the form of abstract mathematical equations constructed in order to explain a finite number of measurements derived from a limited number of physical processes. They are not fundamental in any way, shape or form.
Contrast this self-evident truth with the attitude of physicists who are prone to declaring almost any new theory going as ‘fundamental’. Note that Einstein’s framework for relativity started off as merely an abstract coordinate system but quickly morphed into an actual physical process that shaped the entire universe by its causative properties.
Error 8: Linguistic overloading of the term ‘straight line’
The term ‘straight line’ can have several meanings:
Geometrically straight – with reference to a coordinate system
Inertially straight – the unimpeded path of a mass through space
Electromagnetically straight – the path of a light beam
There is no reason that these should all be the same and no evidence that they are. Newton’s first law (the law of inertia) is the assertion that 1 and 2 are equivalent, but without reference to a specific coordinate system. Einstein was so keen on the idea that all 3 were equivalent that he allowed for a curved geometry in order that it be so. The reality is that neither mass nor light is moving through space along a coordinate system; both are moving through a gravitational field and driven only by local physical processes.
Energy conservation
The conservation of energy is widely held to be a fundamental principle of nature (of course it is!) However:
Energy as an abstract quantity is poorly defined
Many physicists will admit that it is not in fact conserved
Energy is frame-dependent in relativity and hence not absolute
In Newtonian physics it is relative to a reference frame which is fixed but undefined
No mechanism is provided for the transmutation of energy from one type to another
Consider two objects in space:
For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. – Wikipedia
We need a reference frame to describe acceleration, so imagine yourself as object A whilst object B accelerates towards you. You don’t feel yourself accelerating and you don’t perceive yourself as having potential energy or of converting it to kinetic energy. This immediately adds an asymmetry to the situation.
This is fine from the point of view of gravity and acceleration, but the claim here is that there is now some energy conversion, some physical process, happening at one place but not the other. Even this may be considered valid, but an observer at B will imagine the same situation but this time with the energy conversion happening at A. There is a disagreement as to what actual physical processes are taking place.
The doctrine of relativity will be fine with the velocity and acceleration disappearing for one observer, as this is all frame-dependent, but if the transmutation of potential into kinetic energy consists of some actual physical process then we are forced to concede that this physical process only ever happens in the other guy’s framework. This sounds like nonsense and so it probably is.
Physicists don’t notice this happening as they have no physical definition of ‘energy’ or energy ‘conversion’ and so have no requirement to say why it only seems to happen to somebody else. However, we do not need to specify a mechanism in order to suppose that one might exist, and that if it does exist, then it must exist in some ‘absolute’ sense if energy is to be transmuted.
To see what sort of mechanism might be in play we note that kinetic energy is really just half the mass times the velocity squared, and ‘potential’ is just the position in a gravitational field. The conversion of potential into kinetic energy is now equivalent to a mass acquiring velocity within a gravitational field.
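In the standard textbook picture the ‘conversion’ is nothing more than bookkeeping for a mass m falling through a height h in a field of strength g:

\[ mgh = \tfrac{1}{2}mv^{2} \quad\Rightarrow\quad v = \sqrt{2gh}, \]

which is simply a statement about the velocity acquired within the gravitational field.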
This is now an identical argument to the one above concerning acceleration under a gravitational field. There must be some mechanism by which this happens and it must be in effect locally to make objects move. It must therefore be in effect even in the rest frame of the observer, i.e. even when the observer appears to himself to be not accelerating.
The theory of relativity, then, seeks to explain away all mechanisms which may be dependent upon acceleration by simply pretending that they don’t exist or at least will vanish in an appropriate reference frame. This has the effect of limiting, rather than expanding, the number of phenomena that can be explained by such a theory.
Rotational movement
Newton put some water in a bucket, spun it on a rope and watched the water climb the sides of the bucket. The question of why this should happen was argued over for centuries afterwards, most famously between Newton’s appeal to absolute space and Ernst Mach’s much later relational critique, without satisfactory resolution.
Einstein described what he thought was an equivalent situation but with a globe spinning in space:
Following Einstein’s (1914, pp. 1031–1032) lead, [..] we consider a globe, held together by non-gravitational forces, rotating with respect to the fixed stars, [..] In this case, the centrifugal forces, rather than giving the surface of the water in the bucket its tell-tale concave shape, make the globe bulge out at its equator. – Michel Janssen
Ouch! There is a big assumption here, which is that centrifugal forces exist at the cosmic scale in the same way that they do in a laboratory within a strong gravitational field at the Earth’s surface. Observational evidence, however, shows that the bulge of a planet is not uniquely determined by its size, mass and rate of rotation. Our sun, for example, has almost no bulge at the equator whilst our moon has a noticeable bulge but little rotation.
Gravitational fields are thought to have some inertial component even by Einstein, and so it should be considered that the inertia experienced by Newton’s water could possibly arise from the fact that it is being dragged through the inertial field of the Earth’s gravity, and that it is this inertial drag that gives rise to the centrifugal forces causing the water to climb the sides of the bucket. The water may have its own gravitational field but the Earth’s field dominates the experiment whilst the bucket spins within it.
The situation of a planet in space is completely different. The Earth is not spinning within a strong enclosing field, but its own field spins with it and again dominates proceedings. There is no reason at all to suppose that centrifugal forces will arise during this situation and no reason to connect the rate of spin with an equatorial bulge.
The whole system forms a spinning vortex field and the resulting activity conforms to the laws and patterns of vortex physics; see the barred galaxy depicted below. The field spirals inwards in a manner similar to a hurricane before stabilising at a fixed radius, within which solid-body rotation occurs.
In the system of the Earth, the planet engages in solid body rotation whilst the gravitational field spirals inwards. A zero-slip condition at the surface gives us the inertial framework we are familiar with and easily explains the Michelson-Morley results if we allow that the propagation of light is not through empty ‘space’ but through the gravitational field itself.
Any equatorial bulge is determined by the dynamics of the vortex system as a whole.
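One standard idealisation from fluid dynamics of the structure described above is the Rankine vortex, with solid-body rotation inside a core radius R and a free vortex outside it:

\[ v_{\theta}(r) = \Omega r \ \ (r \le R), \qquad v_{\theta}(r) = \frac{\Omega R^{2}}{r} \ \ (r > R). \]

This is offered purely as an analogy from vortex physics, not as a measured profile of the gravitational field.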
E = mc²
By now, this equation can simply be treated as a joke!
There is no physical definition of ‘energy’ and no direct way of measuring it, merely inferences made from an as yet unproven and undefined theory. There is only a circular definition of mass and again, no consistent method of measuring it (The gravitational ‘constant’). The E in the equation does not mean what most people think and is something called Einstein’s ‘rest energy’; the ‘m’ here is similarly a ‘rest mass’. These are novel, imaginary quantities arising as artefacts of the theoretical framework.
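For context, the relation from which the ‘rest energy’ is read off in the standard presentation is

\[ E^{2} = (pc)^{2} + (mc^{2})^{2}, \]

so that E = mc² holds only for a body with zero momentum p; the ‘m’ is a rest mass by construction, exactly as noted above.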
These are quantities derived from a theory which is rooted in:
Considerations of imaginary experiments whose outcomes were invented
An arbitrary decision to declare the speed of light constant with insufficient experimental evidence
Goal-oriented attempts to eliminate any physical differences between acceleration and gravity
A failure to define acceleration, gravity or mass
In popular imagination, the energy described in this equation is real energy that is somehow bound up in the structure of an atom and can be somehow harnessed for the purposes of atomic energy or bombs. However, note that none of the foundational elements of the theory or any of the equations have anything at all to do with the structure of an atom. How then can the theory say anything at all about the energy contained in such an object?
Practical examples of E = mc²
Wikipedia gives some ‘practical examples’ in support of the mass-energy equivalence:
A spring acquires extra mass when it is compressed
A weight acquires extra mass when heated
A spinning ball has greater mass than when it is not spinning
We should expect, given the iconic status of the equation, that they have done due diligence, checked the sources and provided good references to support their claims.
The language used suggests that these experiments have actually been performed and the results measured, however, no citations are given and a quick AI search can find no actual experimental results in support of a single one of these claims!
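For scale, the predicted effects are absurdly small, which may partly explain the absence of direct measurements. A spring storing, say, 10 J of elastic energy (an arbitrary illustrative figure) is predicted to gain

\[ \Delta m = \frac{\Delta E}{c^{2}} = \frac{10\ \text{J}}{9\times 10^{16}\ \text{m}^{2}/\text{s}^{2}} \approx 1.1\times 10^{-16}\ \text{kg}, \]

far below anything a balance can resolve.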
In addition, the same article contains the following statement:
The “gadget”-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy released in this explosion carried the missing gram of mass. – Wikipedia
The language suggests that they actually performed the experiment, that they actually measured the mass and energy of the end results of an atomic bomb explosion!
Accurate measurements of such quantities are clearly impossible. The reference supplied gives an estimated ‘yield’ of 21 kt, but to within an accuracy of 10% only! (Malik) This is not the impression given by the Wikipedia article. To cite this experiment as evidence of the mass-energy equivalence is wholly dishonest.
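For reference, the quoted ‘missing gram’ appears to be a calculation rather than a measurement: taking the estimated yield and rearranging the famous equation gives

\[ m = \frac{E}{c^{2}} \approx \frac{21 \times 4.184\times 10^{12}\ \text{J}}{(3\times 10^{8}\ \text{m/s})^{2}} \approx 1\ \text{g}, \]

so the figure is inferred from a yield estimate that is itself only good to about 10%, not obtained by weighing anything.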
We still have no experimental evidence for the famous equation.
The constancy of the speed of light
Albert Einstein postulated that the speed of light c with respect to any inertial frame of reference is a constant and is independent of the motion of the light source. – Wikipedia
.. and..
The speed of light is the same for all observers, no matter their relative velocity. It is the upper limit for the speed at which information, matter, or energy can travel through space. – Wikipedia
These both seem like massive overreach given the experimental evidence or lack thereof.
Alternative hypotheses should be sought.
Alternative hypothesis: The ideas described as the Inertial Field Theory (Gravity as an inertial field) are correct and should be explored as possible explanations for the various effects purporting to support Einstein’s proposal.
This theory proposes that gravity is an accelerating moving inertial field which adopts a vortex structure in space and centres upon the Earth. Both matter and light move within this field even in a vacuum and the movements of both are affected by local field conditions. In the case of matter, the field imbues objects with both inertia and gravitational mass, and in the case of light, the speed and direction are very possibly altered.
Laboratory conditions: This field rotates along with our planet and thus there exists a thin layer at the surface of the Earth where a stable field condition provides the laboratory conditions that we are familiar with and within which almost all experiments are performed. The field is roughly isotropic as far as inertia is concerned and ‘accelerates’ towards the Earth to provide gravity. If a beam of light travels at the same speed in all directions within any laboratory, then this is not surprising. The light uses the gravitational field as a ‘carrier medium’ and will inherit the velocity of such a field. This is the Michelson-Morley experiment.
The solar system: The stars are said to move according to the precession of the Earth’s axis, but the planets are not seen to do the same, which implies that the whole of the solar system is rotating and tilting along with the Earth’s axis. This is consistent with the notion that the solar system is the centre of a giant cosmic vortex and is undergoing ‘solid body’ rotation similar to that of the centre of barred galaxies (see image below).
The gravity of the solar system therefore forms its own ‘inertial frame’ (literally now) and all movement of matter and light will be in relation to this roughly isotropic field.
Deep space: A free-falling laboratory in deep space is not moving relative to the local gravitational field but is being dragged along by it, and so we expect the speed of light to be constant in all directions.
Gravitational lensing: Light is said to bend around massive objects and this surely implies some sort of interaction between light and a gravitational field. There is therefore some physical process at work as a result of this interaction and it is this which needs a thorough investigation. Simply saying ‘the light is bending because space is curved‘ is again avoiding the question and discouraging further inquiry. Light has a physical ‘nature’ and so does gravity and to investigate these is the duty of the physicist.
No surprise: In all the cases above, we expect light to travel the same speed in each direction, but not for the reasons stated by Einstein but for other, more prosaic considerations, which are specific to the local conditions and arise from some, as yet, unspecified laws of physics that control the interaction between light and gravity.
Geo-stationary orbit: This is more interesting. A geostationary space station is moving at speed transversely to the radial field lines of the gravitational field but is stationary with respect to the radius and thus is subject to an inward accelerating flux of such a field. What do we expect light to do in this situation? Will we see the same speed in each direction? Has anybody measured this?
According to Einstein, the speed of light will be the same again.. because he has declared it to be so! However, the mechanics of the situation are different here and so why should we not expect a different outcome? This does not seem unreasonable.
Summary
This is obviously a real mess, with the whole theory having flawed foundations, undefined terms and insufficient empirical evidence to support the claims. In particular the idea of an ‘inertial frame of reference’ is ambiguous at the very least. This is unforgivable since inertial frames of reference lie at the very heart of the theoretical framework and without them there is simply no theory.
Einstein failed to show that gravity is equivalent to acceleration and failed to justify the constancy of the speed of light in any meaningful way.
We have:
No properly defined coordinate system
Velocity and acceleration are therefore undefined
‘Mass’ is ultimately undefined
No new physics
No mechanisms described
Ambiguous terms
Definition creep
Conclusions drawn from ‘thought experiments’
In addition, if we look for empirical evidence we find:
Exaggerated claims made from little evidence
Too much weight placed upon the Michelson-Morley experiment
Failure to consider alternative solutions
Failure to explain the precession of Mercury
Failure to explain or even define rotary motion (Newton’s bucket)
Conclusion: Gravitational fields exist and act via a specific mechanism but the central idea of Einstein is to explain away the effects of gravity by rephrasing it as simply ‘acceleration’, thereby removing any need to describe the mechanism.
The other idea, to simply declare the speed of light to be constant, similarly circumvents the need to describe any physical process by which this might happen. No new physics has been proposed, merely some arbitrary restrictions on how we may interpret measurements.
These are fundamentally flawed ideas and hence the theory can never, ever, amount to anything useful.
Abstract: Energy and matter are transferred from the sun to Earth in brief dynamic bursts (magnetic flux events) via electromagnetic filaments, also known as magnetic flux ropes. The ropes initially form at the Earth’s equator and then travel towards the winter pole. These events seem likely candidates for the original causes of regulatory disorders such as influenza and may explain the simultaneity of outbreaks along lines of latitude. Flux ropes that are visible through the movement of air may be what is represented in much ancient art but misinterpreted as depictions of meteorites. Exceptionally powerful flux events may be the cause of ancient craters on the moon, Mars, Earth and other planets and are still detectable today in a weakened form. Flux ropes likely take the form of Birkeland currents.
Below: An artist’s depiction of a magnetic flux rope impacting the Earth.
A flux transfer event (FTE) occurs when a magnetic portal opens in the Earth’s magnetosphere through which high-energy particles flow from the Sun. This connection, while previously thought to be permanent, has been found to be brief and very dynamic. The European Space Agency’s four Cluster spacecraft and NASA’s five THEMIS probes have flown through and surrounded these FTEs, measuring their dimensions and identifying the particles that are transferred between the magnetic fields.
According to NASA, Earth’s magnetosphere and the Sun’s magnetic field are constantly pressed against one another on the dayside of Earth. Approximately every eight minutes, these fields briefly merge, forming a temporary “portal” between the Earth and the Sun through which high-energy particles such as solar wind can flow. The portal takes the shape of a magnetic cylinder about the width of Earth. Current observations place the portal at up to 4 times the size of Earth – Wikipedia
Flux events as the cause of moon craters
The moon, Mars, Venus and to some extent the Earth, are covered with craters which could well have been caused by magnetic flux events. The craters are said to have been caused by the impact of comets, but several factors militate against this:
Arguments that a lack of atmosphere on the moon allows free passage of comets fail when applied to the Earth
All these craters seem to be of a similar depth regardless of their radius and assumed impact mass
All the craters on the moon, at least, are circular in shape and never elliptical. This suggests that all impact is perpendicular to the surface of the body. This is too much to be a coincidence and needs some explanation
The moon craters show a general spiralling aspect (subjective) and pole clustering unexpected from the impact of random chunks of rock
Are these craters instead the result of powerful magnetic flux ropes which have attached to the moon temporarily and melted the rock into a circle? A magnetic tornado has accumulated debris at its centre and deposited it in a small mound which is easily visible in many pictures. The circular aspect is caused by the tendency of the tornado to ‘ground’ in a least energy pathway or maybe for it to be directly guided by the moon’s gravitational field.
Persistence of ancient rope connections?
Contemporary maps of the moon’s gravitational field show clear anomalies that coincide precisely with existing craters.
Note the preponderance of anomalies on the visible side of the moon (the side facing the Earth).
The technique used was to have two satellites orbit the moon and to form a radio wave connection between them. As the satellites moved around the moon, interference patterns in the radio waves suggested that the distance between the satellites had changed by a very small amount and this was attributed to small changes in the gravitational field of the moon. The scientists speculate that these gravitational discontinuities are caused by the surface irregularities of the moon.
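A minimal sketch of the idea in Python, with entirely made-up numbers: if the inter-satellite distance is sampled over time, differentiating it twice gives the relative line-of-sight acceleration, and it is variations in this quantity that get attributed to the gravitational field below. The sampling interval and the synthetic ‘ripple’ here are assumptions for illustration only, not mission data.

import numpy as np

dt = 1.0                                   # sample interval in seconds (assumed)
t = np.arange(0.0, 600.0, dt)              # ten minutes of samples
d = 100_000.0 + 0.002 * np.sin(2 * np.pi * t / 300.0)   # synthetic range in metres

range_rate = np.gradient(d, dt)            # first derivative: relative velocity
range_accel = np.gradient(range_rate, dt)  # second derivative: relative acceleration

print(f"peak relative acceleration: {range_accel.max():.2e} m/s^2")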
Now if we have crater-like shapes on the moon and are measuring the same irregularities out in orbit then it seems inevitable that we would measure the same irregularities if we were in a higher or lower orbit. To rephrase this, there exists a gravitational(?) tube or filament extending from the moon craters to the satellite and beyond.
The images are crisp which implies a sharp edge to the filament. Newtonian gravity is radiative and dissipative; it is inconceivable that it should produce such artefacts. The filaments are sharp and hold their shape and this suggests some centripetal ‘field movement’ typical of electromagnetic fields.
The notion that these are gravity tubes originating in the physical matter of the moon makes it credible that they should simply dissipate into space at larger orbits, but the formulation as electromagnetic filaments allows the possibility that they hold their shape all the way to some other destination. Now since almost all of these anomalies are on the side of the moon that permanently faces Earth, the logical conclusion is that this is where their true origin lies.
A plausible hypothesis
The gravitational filaments in moon orbit are really some sort of electromagnetic current produced by discharge from the Earth, either directly from the surface or from somewhere in the ionosphere. The moon craters were formed millennia ago from the discharge and although the intensity has abated, the connection has persisted and the filaments persist to this day.
The permanent connection between the Earth and moon has affected the rotation of the moon and entrained it to the position of the Earth, giving the rotation we see today whereby the same hemisphere of the moon permanently faces our planet.
The discharge of possibly vast amounts of energy from Earth to moon acts as a stabilising influence on our planet’s ionosphere and makes life here less hazardous.
The filaments are permanently tethered to the Earth but must somehow cope with the Earth’s rotation. One possibility is that they ‘drag’ along lines of latitude as the Earth spins and that this causes synchronous effects in the health of the population (see below). The dragging may be continuous or may pause for significant periods at some locations more than others; after all, certain locations on the moon seem able to fix the filaments permanently.
A passing over of a filament or a sudden discharge event may destabilise the human regulatory system causing measurable effects in the health of the population.
If there also exist filaments from the sun to the Earth, then these no doubt will interact with the Earth-moon filaments, causing more disturbances and possibly adding a 29.5 day periodic component to any effects on health or biometrics.
A flux event as the cause of Earthly craters
Many similar craters appear on Earth, with one example, the Eye of the Sahara (Richat structure), shown below. This is claimed to be an eroded bubble in the Earth’s mantle, but could it just as easily have been caused by magnetic flux ropes? Note the concentric circles and alternating directions of the breaks in the rocks.
Flux ropes as Birkeland currents
If a flux rope has come all the way from the sun or makes it to the moon without significant attenuation then there must be some ‘cohesive’ forces that are holding all this energy together and preventing it from dissipating into the cosmos. This observation, taken together with the concentric circles shown in the once-molten rock above, suggests that the flux ropes adopt the form of the alternating coaxial currents referred to by cosmologists as a Birkeland current (see below).
The phenomenon is electromagnetic in nature and arises naturally from the laws of electromagnetism. (Wallace Thornhill)
Depictions of some sort of serpent abound on the planet in various forms, here the Great Serpent Mound in Ohio.
The head of the snake is often aligned with the spring equinox and is accepted by many as representing the Sun. Some have interpreted the sculpture as a whole as representing either a solar flare or a stream of meteorites originating at the sun and causing some sort of catastrophe on Earth.
Another interpretation is that what is depicted here is not a violent stream of flaming rocks, but a sinuous plasma tornado initiated by a magnetic flux rope originating from the sun. The serpent may be a mighty tornado or it may be a routine electromagnetic effect that passes by unnoticed.
Elliptical craters
All the craters on the moon are circular but there exist elliptical craters in the Carolina Bays and other places. How are these formed?
An obvious guess is that they are formed by magnetic flux ropes which for some reason hit the surface at an angle. Why? Is this caused by latitude? The season? Some sort of geo-magnetic anomaly in the past that mis-guided the ropes?
They all appear to have the same proportions, so it looks as though they were all caused by the same phenomenon, and possibly all at the same time. They appear to be lacking a central mound.
A seasonal indicator?
Computer simulations suggest that the behaviour of solar flux ropes is different according to season and may therefore act as an indicator of season. This may be of some relevance to seasonal disease.
“According to NASA, since Cluster and THEMIS have directly sampled FTEs, scientists can simulate FTEs on computers to predict how they might behave. Jimmy Raeder of the University of New Hampshire told his colleagues simulations show that the cylindrical portals tend to form above Earth’s equator and then roll over Earth’s winter pole. In December, FTEs roll over the North Pole; in July they roll over the South Pole.” – Wikipedia
“I think there are two varieties of FTEs: active and passive.” (David Sibeck) Active FTEs are magnetic cylinders that allow particles to flow through rather easily; they are important conduits of energy for Earth’s magnetosphere. Passive FTEs are magnetic cylinders that offer more resistance; their internal structure does not admit such an easy flow of particles and fields. (For experts: Active FTEs form at equatorial latitudes when the IMF tips south; passive FTEs form at higher latitudes when the IMF tips north.)
So local electro-magnetic conditions are profoundly different in summer and winter, with the winter months transferring more energetic particles from the sun to Earth than the summer months.
What happens if you are standing underneath a sudden influx of energy from the sun?
A cause of disease?
The post: Influenza and field vortices speculated that influenza outbreaks are caused by electromagnetic field vortices in the atmosphere and the post: Influenza and weather found many correlations between weather events and outbreaks but only during winter months.
The regulatory system is electromagnetic in nature
Outbreaks form along lines of latitude
Outbreaks are correlated to changes in the weather
Changes in the weather are accompanied by changes in the local electromagnetic conditions
Outbreaks are seasonal but not related to temperature
The northern flu season starts at the equator and moves up the latitudes towards the north pole in winter (Hope-Simpson)
The southern flu season moves the other way
The correlations with weather events are very precise, but only in the correct season
The question that remains is then: “What is it that defines the season?”. We need to find some reason why the electromagnetic conditions are reliably different in winter, some explanation as to why the ‘season’ moves from south to north each year and a reason why the season does not coincide precisely with the calendar year.
The behaviour of the flux ropes gives us some basis for an explanation although the description is far from complete.
We now have a theoretical model (computer simulation), at least, which:
Is electromagnetic in nature
Is a highly localised phenomenon
Is seasonal in nature
Produces different local electro-magnetic conditions in summer and winter
Has some possibility of ‘movement’ from south to north
Could possibly travel along lines of latitude
Is likely linked to solar activity (sunspots etc.)
Latitudinal synchrony
The chart below shows the number of cases of flu measured over a period of 6 years in Cirencester (UK) and Prague.
The pattern matching is striking, with precisely matching timings and similar magnitudes. The same cause is present in both cases in towns which are 800 miles apart but at similar latitudes.
Hypothesis: Magnetic flux ropes are the cause of weather disturbances that lead to influenza outbreaks in these towns. If a flux rope can survive for more than just a few hours, then it traces out a line of disturbed magnetic activity on the ground as the Earth turns beneath it.
Think about what happens as the Earth turns. A rope from the sun cannot remain fixed at a point on the surface as this would mean looping around the planet at night time, which seems unlikely. The footprint of the rope must remain approximately stationary with respect to the sun whilst the Earth turns beneath it.
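A minimal sketch of this in Python, under the hypothesis above: a footprint held fixed with respect to the sun at a given latitude sweeps westward over the ground at roughly 15 degrees of longitude per hour as the Earth turns. The function name and the example longitudes are purely illustrative.

def ground_longitude(start_lon_deg, hours_elapsed, deg_per_hour=15.0):
    # Longitude (degrees east) of a sun-fixed footprint in Earth-fixed
    # coordinates: the Earth turns eastward beneath it, so the footprint
    # drifts westward along its line of latitude.
    return (start_lon_deg - deg_per_hour * hours_elapsed) % 360.0

# Illustrative only: a footprint over Prague (about 14.4 E) reaches the
# longitude of Cirencester (about 2.0 W, i.e. 358.0 E) roughly an hour later.
for h in (0.0, 0.5, 1.0, 1.1):
    print(h, round(ground_longitude(14.4, h), 1))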
The flux ropes are described as moving towards the pole in the winter and it is interesting to note that although Prague is at a latitude of 50°, Cirencester is nearly two degrees north of this, at 51.7°. The rope may be spiralling slowly northwards.
Latitudinal gradient
Grenfell et al. in the chart below found a strong correlation between influenza ‘season’ and latitude, with outbreaks occurring later with the more northern latitudes. (The start of July is week zero I think)
This is an important discovery as the usual assumption is that outbreaks are somehow coincident with colder weather. This cannot be true as it would imply a general southerly progression from the north pole downwards.
How to explain?
If we think that influenza is caused by some atmospheric disturbance of electromagnetism, then in the first instance we are looking for some such disturbance that moves from south to north as the northern winter season progresses.
But we already have, from the Wikipedia article: “(Computer) simulations show that the cylindrical portals tend to form above Earth’s equator and then roll over Earth’s winter pole. In December, FTEs roll over the North Pole; in July they roll over the South Pole.”
This looks promising, but it isn’t too clear what is happening here. How long does the rolling take? What happens in October? Does it still roll all the way to the North Pole or does it pause half way?
This lack of clarity doesn’t really help our hypothesis but neither does it contradict it, so flux transfer events are still plausible candidates for the initiation of seasonal outbreaks of influenza.
Ancient petroglyphs and plasma discharge
Much ancient art seems to depict shapes similar to plasma discharge produced in a laboratory (Anthony Peratt) thereby suggesting that our atmosphere may have been considerably more electrically active in the past than it is today.
Crop circles
Crop circles are found mostly on Salisbury plain and have been theorised to have been caused by plasma whirlwinds (Wikipedia)
In Cilycwm, Wales, a farmer claims to have witnessed such a creation:
Mr William Cyril Williams wrote: “With reference to the corn circles mystery I actually witnessed one being made. I was standing in a cornfield one morning and saw a whirlwind touching the ground and forming a circle in the corn. It was just the strength of the wind in the whirlwind that formed the circle”.
The event happened in the late 1940’s when he worked on his father’s farm, Penfedw Farm at Cilycwm.
The rock art at Newgrange clearly depicts some sort of vortex wind flowing over some water:
Plausible hypotheses
Magnetic flux ropes in the past were more abundant and more visible. Even if they could not be seen directly, they could be ‘felt’ or inferred by circles in the grass for example.
Stonehenge, Avebury, Cilycwm, Newgrange and half of Salisbury plain are all at around 51° N, the same latitude as both Prague and Cirencester. This may or may not be of significance.
The flux ropes took up either permanent or seasonal residence at certain sacred places and the locals decided to make use of the intermittent (seasonal) energies associated with them.
Geoffrey Drum (Land of Chem) argues that these places were constructed as fertiliser factories, harnessing energy to facilitate bio-chemical reactions. Very possibly the astronomical orientation of the buildings helped to keep track of the season in order to optimise energy usage.
Smaller constructions are to be found all over the world and are assumed to be tombs for the local dignitaries. An alternative idea might be that they are attempted healing chambers; the energy has been sensed or even seen by the locals and they have placed their sick there in the hope of rejuvenation.
The ‘gravitational constant’ is measured via the Cavendish apparatus shown right. Some metal balls are mounted on a torsion balance and attempts to measure the attractive force between them are made. Calculations are performed and the result is the value of the Newtonian gravitational constant.
Many questions arise:
What is really being measured here?
What is being calculated?
Why is it not constant over time?
What do gravity and mass have to do with this?
Why do even simple equations seem confusing and self-referential?
What is mass anyhow?
Problems in general
The notion of gravity is usually thought of as connected to one or more of:
Objects falling to Earth
Objects having ‘weight’
Planets orbiting stars etc.
These are all thought of as involving the same phenomenon (gravity) but they are all different processes with potentially different causes and it is wrong to lump them all together prior to actual proof that they all have the same root cause.
Note that the Cavendish experiment involves none of these mechanisms:
Nothing falls to Earth
The weight of the balls is irrelevant
Nothing orbits in free-fall
Moreover, the experiment takes place within the gravitational field of the Earth and relies upon it for correct functioning but the nature of the field itself is not thought relevant to the results. The apparatus is sometimes shielded from magnetic fields via a Faraday cage, but the Earth’s gravity cannot be blocked out.
The gravitational attraction between the weights is very small compared with the Earth’s attraction, and this in turn actually varies at different places on the planet.
This is bad technique. Imagine you want to place two ping pong balls on the surface of a still pond to see if they are attracted by surface tension. You don’t have a still pond and so you try using a slow-flowing stream and just subtract off the speed of the flow to compensate. Effects from the water flow in and around the balls simply outweigh the effect you are trying to measure.
More problems arise from the theoretical model used to make the calculations.
What is ‘mass’?
Newtonian physics actually defines three types of mass which all have their action via different mechanisms but are all assumed to have the same numerical value:
Inertial mass: This is the resistance of an object to being moved
Active gravitational mass: This is the mass which is assumed to create a gravitational field
Passive gravitational mass: This is the mass that responds to a gravitational field by accelerating downwards
We can add a fourth type of mass for ‘clarity’:
Weight mass: This is the mass that depresses the mechanism of a set of scales to give a reading for the weight of the object within a specific gravitational field
The weight mass is generally treated as being synonymous with the passive gravitational mass but:
All objects fall to the Earth with the same acceleration which means that the passive gravitational mass is effectively unmeasurable: Gravity debunked
An object placed upon scales is not accelerating towards the planet and so the passive gravitational mass may not be said to be involved
If there is no acceleration then the inertial mass cannot be said to be involved. Therefore, we need a new type of mass
All these masses are treated as if they are the same thing and are invariably referred to as ‘mass’. This linguistic trick allows continual switching between the masses without the need for justification and without the reader noticing what is happening.
Until the masses are proven to be equivalent, this is a dishonest practice.
Mass as an ‘intrinsic property’ of matter
Mass is often described as ‘an intrinsic property of matter’. This is highly deceptive.
This description lends weight to the idea of mass (4 types) as somehow ‘invariant’ or maybe ‘constant’. This in turn makes the idea of a gravitational constant seem more likely.
The assertion is that both ‘mass’ and big G are fundamental properties of Nature, that their values are constant and that, as a consequence, there is something there to be measured. The big problem is that the measurement of these quantities varies in both time and space.
This suggests to the unbiased mind that the underlying quantities are varying, but the scientists will have none of it and insist that the problem is with the measurement techniques. The language used implicitly suggests that there is something invariant to be measured and that if any measurement is contrary to this then it is the measurement that is at fault.
This is not science. We require that theory emerges from measurement and that verification of the theory proceed from further measurements. What we have, however is that a theory has been proposed and that measurements do not agree and so they are explained away by measurement ‘error’.
Measurements are not errors, they are the foundations of science.
A fundamental constant of Nature?
An AI engine tells me:
The universal gravitational constant (G) does not vary; its value is a fundamental constant of nature. However, the measurements of G show significant variation, a result of experimental uncertainty and the immense difficulty in accurately measuring the constant due to gravity’s weakness.
No. Absolutely not true.
All we have is measurements. A theory is not automatically ‘true’ by itself but relies upon measurements.
The assertion is that something called a ‘Fundamental Constant of Nature’ exists but is somehow unmeasurable! If we can’t measure it then how do we know that it exists in any sense at all? If attempts to measure it give different results then how can we say that it is a constant? Where is the evidence for this?
Big G is not a fundamental of nature and is not even a fundamental of the measurement system as it cannot be measured directly and must be calculated from other ‘measurables’. It may be regarded as a fundamental of Newtonian theory, but if this is at odds with Reality then what is the point of the theory?
The idea that there is ‘something out there to be measured‘ lends a strong bias to scientific thought, but this really is an illusion. The reality is that all we have is a set of measurements and everything else is merely an interpretation of those measurements.
Now scientists have found that the measured G values oscillate over time like a sine wave with a period of 5.9 years. It’s not G itself that is varying by this much, they propose, but more likely something else is affecting the measurements.
Also:
Once a surprising 5.9-year periodicity is taken into account, most laboratory measurements of G are consistent
Or: “Once the variations are adjusted for, the value appears constant”. Well, yes, I guess so .. but why the variation in the first place?
Scientists have started to think that variations in gravity may somehow be connected to ‘inertia’.
As a clue to what this “something else” is, the scientists note that the 5.9-year oscillatory period of the measured G values correlates almost perfectly with the 5.9-year oscillatory period of Earth’s rotation rate, as determined by recent Length of Day (LOD) measurements. Although the scientists do not claim to know what causes the G/LOD correlation, they cautiously suggest that the “least unlikely” explanation may involve circulating currents in the Earth’s core. The changing currents may modify Earth’s rotational inertia, affecting LOD, and be accompanied by density variations, affecting G. – Zyga
As the data shows, the length of each day varies slightly, with some days slightly longer and some days slightly shorter than others. The LOD variation is a measure of the speed of Earth’s rotation, and the scientists in the current study found that its periodic oscillation aligns almost exactly with the G oscillations. – Holme et al.
The equation for big G
The value of the gravitational constant is calculated by a version of the Cavendish experiment which consists of metal spheres on rotating frames. As the spheres become closer to each other a small force of attraction is detected between them and the gravitational constant is calculated from the resulting force.
The formula for the force between two masses is given as:
F = G·m₁·m₂ / r²
Where:
G is the gravitational constant
F is the force between the masses
m₁ and m₂ are the masses
r is the distance between the masses
We can rearrange to get G in terms of force and mass:
G = F·r² / (m₁·m₂)
Note that the type of mass to be used is never specified, thereby ‘forcing’ on us the unproven assertion that they are all interchangeable.
Within this framework, mass is not absolute and not an innate property of matter, but a result of interaction between matter and the inertial field. One consequence of this is that if there is a change in the nature of the inertial field then there will be a change in mass and consequently a change in the calculated value for G.
This way of looking at things makes it clear that the different types of mass arise not from different ‘properties’ of matter but from the different forms of interaction with the inertial field:
Passive gravitational mass: This is unmeasurable and therefore does not exist for practical purposes. Downward acceleration is caused by the acceleration of an inertial field and is independent of anything called ‘mass’.
Active gravitational mass: This is an oversimplified representation of the total inertial field associated with an object. It has an attractive (accelerative) component but also a static inertial component.
Inertial mass: This is the result of an object’s interaction with the ambient inertial field and arises from attempts to accelerate an object relative to the field. This perceived mass will change if the inertial field changes. In all laboratory experiments on Earth, the ambient inertial field is the Earth’s gravitational field. This means that if the Earth’s gravitational field were to change in a particular way, then the measured inertial mass of all objects at the Earth’s surface will also change.
Weight mass: Weight arises from attempts to move an object upwards, against the downward acceleration of the inertial field. The measured weight is a combination of the magnitude of the downward acceleration of the field and the inertial resistance to any attempt to move objects against such acceleration. For a fixed object, the measured weight will vary according to either changes in the downward acceleration or changes in the inertial resistance of the field.
Note that according to the Newtonian formulation, ‘weight’ and ‘inertia’ are different phenomena, but in the context of an inertial field, they are actually identical once the acceleration has been factored out.
To see this, imagine trying to establish the weight of something in a freely falling lift. The object is weightless until an ‘upward’ acceleration is applied to the scales whereupon the ‘weight’ of the object is immediately apparent. But this ‘weight’ is now synonymous with what might be called ‘inertial resistance’. In other words, the inertial mass is identical to the weight mass.
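A small numerical sketch of the falling-lift picture may help; the numbers are purely illustrative. In free fall the scale reads nothing, and a reading only appears in proportion to the acceleration applied against the falling frame, which is exactly the sense in which weight and inertial resistance coincide.

```python
# Illustrative numbers only: what a scale reads inside a freely falling lift.
# The reading depends solely on the acceleration applied against the falling frame,
# which is the sense in which 'weight' and 'inertial resistance' coincide here.
m = 10.0   # the label usually called 'mass' (kg)
g = 9.81   # downward acceleration of the frame (m/s^2)

def scale_reading(upward_acceleration):
    """Force (N) needed to accelerate the object against the falling frame."""
    return m * upward_acceleration

print(scale_reading(0.0))    # free fall: weightless
print(scale_reading(g))      # held against the frame: the familiar 'weight' (~98.1 N)
print(scale_reading(2 * g))  # pushed upwards harder: the reading doubles
```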
The equivalence principle
“The equivalence principle is the hypothesis that the observed equivalence of gravitational and inertial mass is a consequence of nature.” – Wikipedia
It is necessary in Newtonian theory to make this hypothesis. The two types of masses are both said to be innate properties of matter and there is no way of measuring the gravitational mass. Once this hypothesis is made, the rest of the theory can proceed.
With the Inertial Field Theory there is no need for a passive gravitational mass, and the behaviour of a weight on a spring is identical to that of a weight undergoing acceleration against the inertial field. This is an equivalence that arises as a direct theoretical consequence and needs no additional hypothesis. To rephrase: here the principle derives from the theory, not the theory from the principle.
“... in a gravitational field the acceleration of a test particle is independent of its properties, including its rest mass.” – Wikipedia
The illusion of mass arises from the interaction between matter and an inertial field. There is no such measurable quantity as ‘rest mass’, which is why it plays no part in acceleration. Moreover, if the field itself can be said to be ‘accelerating’, then the test particle is, ipso facto, at rest with respect to the field.
“… in a uniform gravitational field all objects, regardless of their composition, fall with precisely the same acceleration.” – Wikipedia
If the field is visualised as accelerating then all objects maintain their position with respect to the field.
Obtaining the measurements for Big G
The equations for calculating the gravitational constant are not complicated, but they are confusing, largely because they are highly ambiguous with regard to the type of mass to be used.
The equation for big G is:
G = F·r² / (m₁·m₂)
But which masses are we to use?
Passive gravitational mass: This is unmeasurable, irrelevant and non-existent for practical purposes.
Active gravitational mass: This can be calculated from the results of a Cavendish Experiment, but only once we have ascertained the value for G! To see this, try using two identical masses m and rearranging the above equation to get:
m = r·√(F/G)
So to calculate the value of G we need to calculate a gravitational mass but in order to get a value for this we need to already know Big G!
This is clear nonsense, but texts on the subject just ignore it. They will just say that we need a value for mass and proceed to calculate it some other way. The Equivalence Principle is invoked in order to use a different fundamental quantity as the ‘m’ in the equation for the gravitational constant.
Mass as weight: An object is weighed on Earth and the weight is divided by the measured gravitational acceleration at that location (m = W/g), and this is used as a definition of ‘mass’. The acceleration varies over the globe and this method in any case introduces a new variable into the calculations. We are trying to measure the attractive force between two masses: why bring the Earth’s gravity into the equation?
Inertial mass: The inertial mass can be calculated without knowledge of G or the Earth’s gravitational acceleration. A known force is applied to the mass and the resulting acceleration is measured. The mass is then calculated from Newton’s F = ma as:
m = F / a
Calculating Big G
Congratulations on getting this far. The point is to write down a formula for G using only those quantities that have been explicitly measured. Textbooks and science websites use ‘m’ for mass all over the place to give the impression that the calculations have something to do with ‘mass’, but as we have seen above, this quantity is never directly measurable.
Substituting the inertial masses m₁ = F₁/a₁ and m₂ = F₂/a₂ into the definition of Big G we get:
G = F·r²·a₁·a₂ / (F₁·F₂)
Where:
F is the force measured by the Cavendish balance
r is the distance between the Cavendish weights
F₁ and F₂ are the forces applied to the weights to measure inertia
a₁ and a₂ are the resulting accelerations
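As a sanity check on the algebra, here is a minimal sketch that computes G purely from the ‘measurables’ named above: the Cavendish force and separation plus a force/acceleration pair for each weight. Every number is invented for illustration and the variable names are mine, not drawn from any published experiment.

```python
# Minimal sketch: compute G purely from force and acceleration measurements.
# All numbers are invented for illustration (chosen to land near the accepted value).

# Inertial 'mass' of each sphere from a push-and-measure step (m = F/a)
F1, a1 = 1.5, 0.01     # applied force (N), resulting acceleration (m/s^2)  -> m1 = 150 kg
F2, a2 = 0.73, 1.0     # -> m2 = 0.73 kg

# Cavendish measurement between the two spheres
F_cav = 1.44e-7        # attractive force measured by the torsion balance (N)
r     = 0.225          # centre-to-centre separation (m)

m1, m2 = F1 / a1, F2 / a2
G_via_mass    = F_cav * r**2 / (m1 * m2)            # using the derived 'masses'
G_measurables = F_cav * r**2 * a1 * a2 / (F1 * F2)  # written only in measurables

print(m1, m2)                      # 150.0 0.73
print(G_via_mass, G_measurables)   # both ~6.66e-11
```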
So what did we actually measure?
We pulled some weights around with springs and then put them in a Cavendish torsion balance. We then applied a magic formula to calculate a value G, which we then declared to be fundamental, constant and something to do with gravity.
All experiments were performed close to the surface of the Earth and so within the variable gravitational (inertial) field of the planet.
The actual ‘measurables’ of the system were:
Acceleration, i.e. distance and time
Force, i.e. the deformation of a mechanical spring or torsion of a wire
Everything else is conjecture.
If everything is calculated from the measurables, then these measurables may be said to be the fundamentals of the theoretical system and other quantities such as mass or gravitational constants are best described as ‘derived quantities’ of the theory. If the theory changes then so do the values of the derived variables, but not the measurements themselves. The measurements are immutable, not the derived values.
So what did we prove?
Goodness knows!
We most certainly did not prove that the value of G is constant, as experiments showed that it varied according to a 5.9 year cycle.
Statements along the lines of “The constant is still constant but we need to measure it better” are just wishful thinking. Where is the proof of constancy to be found if not in the measurements?
The calculated value of G is some sort of summary of the state of the inertial field at the Earth’s surface and how it affects the movement of objects. It probably has little to do with any innate property of matter.
The idea that this calculated and variable value represents some fundamental constant of nature is surely misplaced.
Why does the constant vary?
It varies because some of the inputs to the equation vary. Either the force between the spheres in the Cavendish apparatus changed (easily verifiable surely?) or the inertial masses of the spheres have changed.
The most likely explanation seems to be the latter. Inertia, according to the Inertial Field Theory (IFT), is partly a function of the inertial component of the gravitational field, and it is this that appears to change on a 5.9-year cycle. In Newtonian theory, inertia is (as with almost everything else) an innate property of matter and therefore cannot change, but inertia in the IFT is down to interaction between the weight and the Earth’s gravitational field.
The gravitational field of the Earth is an extension of the matter which comprises the Earth (Gravity as an inertial field) and hence rotates with it. It follows therefore that variations in day length should be associated with variations in inertia and hence also with a variable gravitational constant.
The variations in the rotational speed of our planet arise from natural variations in the enclosing vortex structure and probably defy any attempts at theoretical prediction. Just observe a vortex in a river and note that it is fundamentally stable, but with slight perturbations in speed and position as it absorbs energy from the surrounding flow. The Earth’s gravitational field is very likely similar to this. Perturbations may appear cyclic, currently with a 5.9-year period, but this may very well change even in the near future.
Criticisms of the Cavendish experiment
Criticisms have already been made but it cannot do any harm to collect them together as a summary of the situation.
The Cavendish apparatus is intended to measure the strength of gravitational attraction between two metal spheres. The effect is expected to be small relative to the effects from the Earth’s gravitational field, but all experiments are performed within this field with the hope that the experimental results are somehow unaffected by the ambient conditions. The ambient conditions are known to be variable.
In addition to this the whole of Newtonian theory adopts a confused attitude to inertia, gravity and mass. Inertia and gravitational attraction are said to be two different phenomena and to both derive from their own innate property of matter, i.e. that of ‘mass’. The two masses are held to be different properties to start with and to achieve their effects by presumably different, although unspecified, mechanisms. However, they are also assumed to have the same numerical value as each other and are inevitably treated as if they were precisely the same thing.
The problem, though, is that these ideas when taken as a whole are not supported by experimental results. We should get consistent and constant measurements if mass is an intrinsic property of matter, but we don’t.
The experiment tries to measure a weak gravitational effect within a larger and more variable gravitational field. It plays mix ‘n’ match with different types of mass with no theoretical justification and it gets the wrong answer.
Videos of the development of the brain and nervous system of the zebra fish imply the action of centripetal forces typical of vortex flow and the delivery of energy via electromagnetic ring vortices. Both phenomena are consistent with the existence of an electromagnetic bio-field as described here: The nature of the bio-field
The morphogenesis of the brain and nervous system is described in great detail in various papers, but this post ignores specific details of the construction and simplifies for clarity by concentrating only on the flow of energy needed to achieve such a complex feat.
The following are assumed:
That energy is conserved i.e. neither created nor destroyed
The laws of electrodynamics hold
The existence of electrodynamic vortex structures as a necessary consequence of such laws
Development of the brain
Pause and think before watching the video of the genesis of the brain of a zebrafish. Try to predict what you will see. Expected is that maybe a small brain stem acts as a ‘seed’ which blossoms outwards similar to the growth of a cauliflower or a cumulus cloud; development is bottom-up as the large scale brain is created from small scale cells.
The surprise is that nothing of the sort happens and instead what we appear to see is that a nebulous mass of particles suddenly self-organises into a brain-like shape, with an overall global organisation actually preceding the development of the finer details.
The brain seemingly does not arise from the organisation of cells, but the cells arise from the way the brain organises energy and matter.
Centripetal organisation
The first thing we see is that ‘matter’ starts to move towards the centre of the brain in a double vortex pattern. Now simple molecules cannot decide to do this by themselves and so some overall organisational principle is in play. Moreover, any movement of matter necessitates the presence of some sort of force. The movement is centripetal and so the forces are centripetal.
The solution for this is an electromagnetic double vortex acting as a morphogenic field. The vortices form within the skull cavity and matter is drawn inwards in a similar manner to that of a tornado. The difference here is that the vortices are electromagnetic field vortices instead of moving wind vortices. An electromagnetic gradient is created between the circumference of the vortex and its centre whereupon particles are drawn inwards according to their charge structure.
Top-down organisation
After the initial accumulation of matter towards the centre of each hemisphere, shapes begin to appear and matter is organised into tissue. The mass takes on the appearance of an actual brain.
‘Flow’ in vortex systems is from the outside to the inside and from large scale to small scale. The structure becomes fractal-like, with smaller and smaller vortices appearing at each level and continued centripetal energy flow and organisation.
The idea that the smaller details appear before the global shape of the organ now sounds unlikely. Any manufacturing at the cellular level needs a certain amount of raw material and a reliable flow of energy. Moreover, in such systems, the energy flows from the larger scale to the smaller. It therefore follows that the construction of the smaller scale structures depends upon the prior organisation of a global vortex system. The assembly of the global vortex must therefore precede the manufacture of any of the cells that emerge from it.
Organisation and energy flow are therefore top down according to theory, and the theory is supported by the observations.
Morphogenesis of the nervous system
Another remarkable video shows the development of the nervous system within a zebrafish embryo.
Nerves seem to grow spontaneously with little guidance
A glow near the growing tip indicates superfluous energy here
Freely moving entities (top right) are continually spinning, indicating the presence of vortex fields
The growing tip of each nerve emits light. Light is dissipative energy and therefore not organisational by itself. The emission of light therefore serves as a waste disposal mechanism. Energy accumulates at the ends of the nerves, is used to manufacture the next section of tube, and any excess is dissipated as light.
The question, then, is how does all this energy arrive at the growing tip? Several options come to mind:
It is already there, ready for use
It accumulates in real time around the nerve
It is supplied along the nerve
In the first two cases, a centripetal accumulation of energy is implied. All the energy gathers at the required place, somehow guided by organisational forces. The fact that accumulation is centripetal implies some vortex system is at work.
In the third case, the energy travels along the nerve, presumably originating in the brain. Researchers have found that the speed of nervous conduction depends upon the thickness of the myelin sheath and this in turn suggests that conduction occurs within the myelin sheath. This is consistent with the idea that energy is conducted along nerves in the form of electromagnetic ring vortices, with most of the field movement being magnetic and occurring within the (electrically insulating) myelin sheath with a concomitant movement of electric field along the (electrically conductive) central tissue which is interpreted as the main ‘current’.
The twin vortex system of the brain is still acting as an energy accumulator at this point and supplies the energy required for the formation of the nervous system. Ring vortices travel along existing neurons to their end where sufficient energy accumulates to create or assemble new tissue. Excess energy is dissipated as light into the general vortex system whence it is recycled back inwards to where it will be needed. Ring vortices are easily guided by the ambient electromagnetic field gradients and thus the nervous system takes the form ‘intended’ by the global morphogenetic field.
The face of a frog
In this short clip, an electric field outlines the structure of a tadpole’s face even before the cells have started to differentiate, confirming that it is the field that is the organisational element and that such organisation is top-down.
The claim that we weigh less at the equator because of centrifugal force is not supported by empirical data. Natural variations of the gravitational field owing to variations in planetary density are sufficient to account for the differences in weight along a given latitude, and the equatorial bulge is sufficient to account for the differences in weight between equator and poles. The results are consistent with the idea of the Earth creating its own spinning frame of reference relative to which the planet itself is actually stationary.
The data
I asked an AI engine to give me the values of gravitational acceleration across the globe.
So the variation between poles and equator is the difference between 9.832 and 9.780 which is 0.052 m/s².
I now asked for typical variations across a single latitude
So the difference in gravitational strength between the poles and the equator is smaller than the total range for the planet as a whole and is comparable to the variation found along a single latitude. This variation, then, may not be attributed to a spinning Earth without further evidence.
Equatorial bulge
I now asked the engine to summarise the variation in gravitational field strength according to the bulge of the equator alone.
So to get from the stronger gravity at the poles to the weaker gravity at the equator we take the pole value of 9.832 and multiply by 0.9933 to get the value of 9.766. The difference between these two values is 9.832 minus 9.766 which is 0.066 m/s², that is to say, an even bigger difference than actually measured. There is no need for any additional adjustment to be made here; everything is explained by bulge alone.
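The arithmetic is simple enough to set out in a few lines. The figures are the ones quoted above; the script just reproduces the subtraction and the bulge-only scaling so the comparison can be checked at a glance.

```python
# Reproduce the figures used above (values as quoted in the text).
g_pole    = 9.832   # m/s^2
g_equator = 9.780   # m/s^2

measured_difference = g_pole - g_equator
print(f"measured pole-equator difference: {measured_difference:.3f} m/s^2")    # ~0.052

# Bulge-only estimate: scale the polar value by the quoted factor of 0.9933
g_equator_bulge_only = g_pole * 0.9933
bulge_only_difference = g_pole - g_equator_bulge_only
print(f"bulge-only prediction:            {bulge_only_difference:.3f} m/s^2")  # ~0.066
```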
Too many variables?
We have a measured variation of 0.05–0.07 m/s² across the globe, along lines of latitude and from equator to pole. We have theoretical variations of around 0.05 m/s² predicted from crustal variations, from centrifugal forces and from the equatorial bulge.
Sometimes a measurement is attributed to crustal variation, sometimes to equatorial bulge and sometimes to centrifugal force, seemingly dependent upon the argument to be made at the time.
This is no way to do science. There are too many variables to be resolved in a few ad hoc experiments and certainly, in the data above, no chance of sensibly interpreting any single measurement or attributing any single cause with any degree of certainty.
Centrifugal force?
The variation in weight between the poles and the equator is adequately explained by the bulge of the planet at the equator. There is no need to bring centrifugal force into the equation, as the data simply do not require it.
If the calculations and measurements above are correct then additional adjustments for centrifugal force will in fact give incorrect results. This suggests that centrifugal forces at the planetary scale are not merely irrelevant but perhaps even non-existent.
A rotating frame of reference
Experiments demonstrating the existence of centrifugal force are all small scale affairs and performed in laboratories within the Earth’s magnetic field, whether at the surface of the Earth or in freefall nearby. The effects seen can therefore be explained by the action of objects moving through an ‘inertial field’ as explained here: Gravity as an inertial field
It is far from obvious, however, that the phenomena of rotation, Coriolis forces and centrifugal forces can simply be transferred from a laboratory to the scale of a planet within a solar system. If scientists claim that they can, then this must be rigorously demonstrated with data and arguments that are somewhat more reliable than the ones presented above.
Attempts to demonstrate the rotation of the Earth by means of a Foucault pendulum are no more rigorous and no more conclusive than those described above: Gravity as an inertial field
If the Earth’s inertial field rotates along with its mass, then there is no centrifugal force to alter the weight of an object at the equator. This is entirely consistent with the above data and even supported by it.
The Bible claims that the sun stopped moving in the sky for a whole day. This is impossible without great catastrophe according to mainstream physics but an explanation from vortex physics allows this to happen in a manner consistent with biblical texts.
“‘O sun, stand still over Gibeon, O moon, over the Valley of Aijalon.’ So the sun stood still, and the moon stopped, till the nation avenged itself on [or, ‘triumphed over’] its enemies… The sun stopped in the middle of the sky and delayed going down about a full day” (Josh. 10:12–13).
A modern day interpretation is that it is the Earth that has somehow stopped turning, giving the impression of the sun being stationary in the sky for a day. This is not possible according to classical physics and so physicists (including Einstein) have come up with the idea that the poles could one day become top-heavy with ice and the whole planet topples on its side for a bit before either restoring its position or taking up a new position with North and South poles switched around.
Needless to say, such an inversion would create havoc upon Earth, with earthquakes, the splitting of the Earth’s crust, and catastrophic flooding as the Earth stops turning but the oceans continue to rotate at 1000 mph. However, the text describing Joshua’s Long Day makes no mention of flooding, merely of the sun standing still. Is this possible?
The assumption of classical physics is that the Earth behaves according to the same physical laws as a gyroscope spinning near the surface of the planet. This is a big assumption, however, as the two situations are completely different. The gyroscope spins within the Earth’s gravitational field, which acts as an inertial frame of reference to give it its characteristic behaviour, whilst the Earth spins with reference to… what? Space?
The problem is that such a reference frame has never been established, with Newton and Mach never quite resolving their differences and as a result we simply cannot say whether the Earth is a big gyroscope or not.
A description which is consistent with the facts so far is that the Earth is at the centre of a cosmic vortex and turns in accordance with such a vortex. The layer close to the Earth’s surface forms an inertial frame of reference for a spinning gyroscope or Newton’s Bucket, whilst the planet spins in harmony with the vortex as a whole.
The behaviour of a spinning Earth is therefore quite different from the behaviour of a gyroscope. It is quite possible for a glitch in the vortex to dramatically alter the spin of the Earth and evidence for this is that the rotational speed of the planets Earth and Venus, at least, will vary from day to day with no ill effects. If the behaviour of the vortex were to put the Earth on its side for a while, then the whole frame of reference is also displaced and nothing unusual will happen at the surface.
A spinning gyroscope gets its angular momentum from the inertial field, but the Earth is not spinning with respect to such a field (it is spinning ‘with’ the field) and therefore has no angular momentum. Newtonian mechanics does not apply here. If the Earth turns on its side, then the gravitational field also turns, and everything operates within the new frame of reference. The Earth is not moved by any ‘force’ but instead by a shift in the ‘geometry of space’. Every atom in the Earth is subject to such a shift at the same time, and so all particles and energies in the planet, its oceans and its atmosphere move in concert with each other with no local disturbances such as floods or earthquakes.
Neither Newton nor Mach established a consistent frame of reference for anything at all and so this hypothesis cannot be said to contradict their theories, since there is no ‘basis’ for such theories.
In the illustration below, the process of vascular development is easily interpreted as the progression of vortex templates from single vortices to vascular tube.
Elongated vortices (endothelial cells) develop a North-South dipole and self-organise according to magnetic forces, with North poles attracting South to form a chain.
Energy continues to accumulate leading to fractalisation and sub-vortices which lead to the formation of cytoplasmic vesicles.
These new vesicles again orient along the magnetic field lines and continue to accumulate energy. They grow in size and elongate in accordance with the ambient field conditions.
The vortex principle (centripetal flow) moves the new vesicles towards the centre of the whole chain where they merge to form a cylinder at the inner radius of the tube.
Energy continues to accumulate at this radius and is available for maintenance, repair and possible contributions to blood flow. Evidence for this is that the blood starts to flow before the heart has formed and it must therefore be acquiring energy from somewhere else: The Heart is not a Pump
Gravity is a ‘field of inertia’ that accelerates towards the Earth and forms a frame of reference for the kinetic behaviour of all solid objects. Objects moving with the acceleration are in free-fall and experience equal inertial resistance in all directions, implying that the field is somewhat isotropic in this respect.
The field near the Earth’s surface accelerates towards the Earth and rotates around with it thereby providing a local inertial frame of reference that both accelerates towards the ground and moves with the surface. Dropped objects will fall ‘vertically’ as a consequence; they are moving vertically with respect to the (rotating) gravitational field and hence with the ground.
No appeal can be made to either linear or angular momentum as fundamentals of this framework – they need to be ‘derived from’ the framework, not ‘added to’ it.
The mass of the Earth is stationary within this inertial framework which takes upon the aspect of a cosmic vortex within the larger vortex of the sun’s gravitational field.
If there happen to be perturbations in the vortex field then these are transferred to the Earth and will account for the variations in day length (20 mins a day with Venus!). No ‘force’ is needed here to move an entire planet, merely a modulation of the gravitational field which necessarily influences the whole planet regardless of its ‘mass’ (another Newtonian concept).
The atmosphere of the Earth is not dragged around by friction as some claim but is actually stationary (on average) relative to the inertial field at the surface of the planet. Atmospheric pressure is created by the inward acceleration of the vortex as a whole; the ‘vortex principle’. The gravitational field at the surface of the planet provides a frame of reference which is stationary with respect to the surface and the whole weather system operates within this frame of reference.
The centripetal nature of the vortex accounts for the spherical shape of the sun, which shows no significant equatorial bulge.
Gravitational acceleration of objects is merely the behaviour of such objects that are stationary with respect to the accelerating inertial field. Geo-stationary objects are actually accelerating upwards with respect to the inertial field.
Objects acquire inertia according to local field conditions only and so the rotational speed of the Earth around the sun is irrelevant, as is the speed of the sun through space and the properties of distant galaxies.
The field at the Earth’s surface provides an inertial frame of reference. The water in Newton’s bucket rotates with respect to this frame and the effect needs no further explanation.
The field is electromagnetic in nature and permeates all matter. Matter itself consists of electromagnetic field modulations. Inertia arises from the interaction between the two fields and consists of a sort of ‘field drag’.
Imagine that, as a force is applied to a stationary object, the movement of matter interacts with the gravitational field to produce some sort of eddy currents. This ‘electromagnetic friction’ opposes the movement initially but once the currents are established, they will tend to persist and serve to preserve the constant velocity of the object with respect to the field. This is interpreted as ‘momentum’ in classical physics; it will take another force to slow the object down. Momentum, velocity and kinetic energy are all relative to the local field conditions.
Conversely, if a gravity field accelerates past (through) an object, then electromagnetic eddies are formed and the object is dragged along with the field in a manner somewhat analogous to a river dragging a boat or maybe a sponge, downstream.
The concept of absolute space is not particularly useful in this respect as all free movement is relative to the local gravitational field. Konstantin Meyl goes further and claims, with good reasons, that the local field conditions also determine length, time, the speed of light and even geometry.
Q: What happens outside of a gravitational field? A: There is no such place.
Each planet of the solar system is at the centre of a gravitational vortex, with interaction with neighbouring or enclosing vortices being complex and according to the laws of electrodynamics. We should expect, from the point of view of Newton’s gravity, to see odd relationships between the planetary orbits and to suspect the existence of hidden (‘dark’?) energies influencing heavenly bodies.
Our atmosphere remains in a thin layer around the planet owing to gravitational attraction, but how does it maintain an identical rotational speed and why are there not 1000 mile an hour winds at the equator?
The most common explanation from AI searches and physics forums is that the atmosphere is dragged around by friction with the Earth’s surface. This is not credible and is contradicted by everyday observations and common sense.
Some explanations describe a non-slip condition at the Earth’s surface surmounted by a shear layer rising away from the surface, but weather balloons rise vertically and con-trails can be seen stationary above us for many minutes; there is no shear layer.
Others say that, over the millennia, the whole atmosphere has acquired sufficient angular momentum to spin with the Earth, and will maintain such synchrony in the future. There are many problems with this:
The air does not maintain synchrony with the Earth’s surface. Cyclonic structures are the norm, with the wind travelling both slower and faster than the spin of the Earth, and from both west to east and east to west. Moreover, we see wind travelling north to south and vice versa. In all these cases, the wind is not moved by friction with the surface, but by the laws of aerodynamics.
The eye of a hurricane moves at relatively slow speeds (10-15 mph) with respect to the surface of the Earth. This speed is determined by the dynamics of the hurricane as a whole and not by local friction between the air and the surface, so the hurricane as a whole is somehow attuned to, or cognizant of, the rotational speed of the ground. We have winds with huge speeds in most of the hurricane with parts blowing with the rotation and others against the rotation. Is it really credible, amongst this mayhem, that friction with the ground somehow stabilises the whole system to move approximately in alignment with the Earth’s rotation? Surface friction is clearly irrelevant to most of the cyclone.
A vast amount of kinetic energy is surely lost in storms and converted to heat but after the storm is over, the wind is seen to be travelling in synchrony with the surface again; there is no need for a millennium of readjustment to take place for this to happen.
The (moderate) wind outside my window has abated to leave a remarkably still garden. I did not see a slow return to normality caused by shear stress. How does the air know what ‘stillness’ is? There appears to be some atmospheric frame of reference to which all air returns whenever it is not being pushed around by other pieces of air. What is this frame of reference?
A few mild gusts and eddies now appear in my garden. The air is being pushed around locally by neighbouring masses of air. I see the air move the trees a bit, but I don’t see the trees moving the air at all. The eddies die down, but not because of friction with the ground. The kinetic energy of the eddies has been dissipated by friction within the airflow itself which, depleted of such energy, has then become motionless relative to some local frame of reference. The air ‘knows’ its place.
The solution
The gravitational field of the Earth forms a roughly isotropic field of inertia at the surface of the planet which acts as a frame of reference for all physical laws and all observable activity.
The field accelerates towards the ground, giving rise to gravitational acceleration, weight and atmospheric pressure. If we factor out the acceleration, then the field gives rise to the same inertial resistance in all directions. The vertical (accelerative) component of the field drops off with the inverse square of the distance, but there is also an inertial component which exists both in the vertical and horizontal directions.
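As a small worked example of the inverse-square falloff of the vertical component: scaling the surface value by (R/(R+h))² gives roughly 8.7 m/s² at an altitude of 400 km. The figures below are standard round numbers.

```python
# Inverse-square scaling of the vertical (accelerative) component with altitude.
# Standard round numbers, for illustration only.
R = 6.371e6    # Earth's mean radius (m)
g0 = 9.81      # surface value (m/s^2)

def g_at_altitude(h_metres):
    return g0 * (R / (R + h_metres))**2

for h_km in (0, 10, 400):
    print(f"h = {h_km:4d} km  ->  g ~ {g_at_altitude(h_km * 1000):.2f} m/s^2")
```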
The field rotates with the Earth at all latitudes and so the air moves locally as if there were no rotation, as if the Earth were stationary.
Newton’s bucket
In the case of Newton’s bucket, the water will be dragged around to form a dip in the middle, but when the bucket stops rotating, the water will settle down to a level surface. Once again there is a sense of a (local) ‘frame of reference’. A rotating solid object will rotate indefinitely owing to conservation of momentum, but fluids and gases behave differently in an inertial field: inertial drag, having a vortex nature, will promote eddies in the fluid or gas, which lead to internal friction and eventual stabilisation with respect to the frame of reference.
Coriolis forces
The above hypothesis makes quite a powerful prediction which is that there are no such things as Coriolis forces at the planetary scale.
This idea came both as a surprise and a shock whilst writing the article and needs addressing. Scientists are adamant that the behaviour of gases, fluids and solid objects is affected by Coriolis forces that deflect the motion of objects from a straight line relative to the surface of the Earth and cause pendulums to swing in a plane relative to the ‘fixed stars’.
We need at least to account for:
The claimed Coriolis forces affecting the weather
The motion of a Foucault pendulum
A ball thrown in a rotating room will appear to follow a curved path because it is really moving in a straight line relative to an inertial frame of reference which seems to follow the rotation of the Earth. However, if such a frame of reference really does rotate with the Earth, then any projectile or stream of air at the surface of the Earth will travel in a straight line where ‘straight’ is, by definition, aligned with the Earth’s rotation.
This is said not to happen, with both streams of air and large pendulums claimed to align, not with the Earth’s rotation but with some other frame of reference, either an ‘absolute’ frame (mechanism not supplied) or with respect to the ‘distant stars’ (mechanism not supplied).
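For reference, here is a minimal sketch of the rotating-room picture invoked above: a ball moving in a straight line in the non-rotating frame, re-expressed in the coordinates of a frame rotating beneath it. It illustrates only the textbook claim under discussion, not the conclusion argued for here.

```python
# Standard rotating-frame picture (the claim under discussion, not an endorsement):
# a straight-line path in the fixed frame appears curved in a frame rotating beneath it.
import numpy as np

omega = 0.5              # rotation rate of the room (rad/s)
v = 1.0                  # ball speed in the non-rotating frame (m/s)
t = np.linspace(0, 5, 6) # sample times (s)

# Straight line in the fixed frame: x = v*t, y = 0
x_fixed, y_fixed = v * t, np.zeros_like(t)

# Same points expressed in the rotating frame (rotate coordinates by -omega*t)
theta = -omega * t
x_rot = x_fixed * np.cos(theta) - y_fixed * np.sin(theta)
y_rot = x_fixed * np.sin(theta) + y_fixed * np.cos(theta)

for ti, xr, yr in zip(t, x_rot, y_rot):
    print(f"t={ti:.0f}s  rotating-frame position: ({xr:+.2f}, {yr:+.2f})")
```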
The Earth’s gravitational field seems locally almost identical at each point on the surface, but we cannot rule out slight variations in the horizontal component across latitudes that may be responsible for meaningful variations in movement over long distances or time intervals.
Before thinking about this, however, we need to check what sort of variations we are required to explain.
Coriolis forces and the weather
A Coriolis force is assumed to arise from the phenomenon of ‘momentum’, which in turn is a derivative of inertia; if the whole gravitational field co-rotates with the Earth, then ‘inertia’ is also aligned with the surface movement.
I made some attempt to find out if there really are such things as Coriolis forces affecting the weather, but got bogged down in circular arguments, ‘arguments from assumption’ and downright contradictions.
I asked AI to explain whether Coriolis forces really did affect the weather. The answers look like they are drawn straight from discussions on physics chat forums.
Cyclones (low-pressure systems) rotate counter clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. This rotation is not due to wind patterns alone—it directly results from the Coriolis effect acting on large-scale air movements.
But there are cyclones near the equator and both clockwise and anti-clockwise systems exist in the northern hemisphere.
The Coriolis effect is necessary to explain the direction of rotation; without it, wind would flow directly from high to low pressure.
This is just not true. Stir a cup of tea and you will create a vortex. The pressure gradient goes from high at the periphery to low at the centre but the flow of water is almost at right angles to the pressure gradient and never along it. The same is true of cyclonic structures in the atmosphere.
Trade winds blow from the northeast in the Northern Hemisphere and the southeast in the Southern Hemisphere—again, due to Coriolis deflection.
They may well do this, but where is the proof that it is caused by Coriolis forces?
Jet streams—fast-moving air currents high in the atmosphere—also follow curved paths influenced by the Coriolis effect.
Again, we would like some sort of argument to show that the Coriolis effect is causal here. An air current cannot just be influenced to follow a curved path; the air on either side of it must have somewhere to go, and wherever it goes must in turn displace other air to make room. The system is organised globally as a series of vortices, this being a necessity for the preservation of topological continuity. The vortex structure dominates the flow patterns and it will be hard to discern or quantify any Coriolis influences within this pattern, particularly when the vortices go round the ‘wrong’ way.
Rotating tank experiments simulate Earth’s rotation and show how fluids (like air or water) develop spiral motion due to Coriolis-like forces.
Yes, but these are rotating tanks within a stationary frame of reference (gravitational field). The whole point of the above arguments is that the Earth’s inertial field rotates of itself, is stationary with respect to the surface of the Earth and therefore not rotating at all for the purposes of laboratory experiments.
The statement “Rotating tank experiments simulate Earth’s rotation” pretty much assumes the thing that is to be proved, which is that small scale experiments can be scaled up to the size of the Earth; they can’t. However, it isn’t the scale that is the problem but the nature of the gravitational field; it cannot act both as a reference frame for laboratory experiments and for the whole planet itself at the same time.
These experiments reproduce cyclonic patterns similar to those in Earth’s atmosphere.
Yes, but cyclonic patterns are produced by the laws of fluid flow and need no rotational impulse to get started; just try preventing water forming vortices and see how far you get.
Major ocean currents (e.g., the Gulf Stream, the Kuroshio Current) follow curved paths and rotate in large gyres consistent with Coriolis deflection.
The movement of ocean currents is very heavily influenced by the shape of the land masses, by convection currents and by the laws of fluid dynamics.
The Coriolis force is described mathematically in the equations of motion for rotating systems (e.g., the Navier-Stokes equations).
This is theory, not observational evidence, and the whole point of the argument on this post is that the theory is inapplicable, as the Earth is evidently not a ‘rotating frame of reference’, but a ‘stationary frame that rotates’ (within the solar system).
Reminder: Classical theory has yet to explain just what a ‘rotating system’ is rotating relative to; ‘absolute space’ doesn’t really suffice as a get-out clause any more.
Foucault’s pendulum
The rotating plane of swing of a Foucault pendulum is often cited as a triumph of scientific achievement and is claimed to prove:
That the Earth is round
That the Earth is rotating
That the Earth is rotating at a specific rate
That the Earth is rotating with respect to some fixed frame of reference
That the laws of Newtonian physics hold
A single experiment clearly cannot prove all these things at once.
Furthermore, from the Wikipedia article and associated Talk tab, we have:
No pendulum has been seen to complete a single revolution in a single day
A pendulum at the equator is claimed to not rotate at all but this experiment has never been performed
An experiment at the South Pole initially showed the Earth rotating the wrong way round: [link]
A second experiment gave a rotational period of 12 hours instead of 24
Further experiments achieved a rotational period of 24 hours ± 50 minutes
Results deemed to be incorrect were discarded and ‘refinements’ (unspecified) made to ‘improve’ the results
Experiments appear to be ‘goal oriented’
The results they are aiming for assume a spherical Earth, but the Earth is ‘oblate’
The only data claiming to be accurate at other latitudes comes from Foucault himself and he can hardly be said to be impartial.
Only a single latitude was attempted
The swing of the weight is heavily influenced by air currents and initial conditions
An attempt to reproduce Foucault’s experiment demonstrated an initial planar swing degenerating to an elliptical pattern after only an hour
No pendulum will swing all day without ‘help’
There is no quality control on the manufacture of the equipment and one pendulum simply snapped and fell to the ground
A pendulum at the equator would provide a good control but nobody has tried this
A series of precise and reproducible experiments using the same equipment at multiple latitudes is required but never even attempted
Publicly displayed pendulums are made to knock down skittles (see image above) which allows the possibility of controlling the precession to some degree
We frequently see theoretical predictions masquerading as experimental results. For example: “A Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counter clockwise 360° in two days.” How do you know this if it has never been observed? (The textbook formula behind this figure is sketched after this list.)
“Heike Kamerlingh Onnes performed precise experiments and developed a fuller theory of the Foucault pendulum for his doctoral thesis (1879). He observed the pendulum to go over from linear to elliptic oscillation in an hour. By a perturbation analysis, he showed that geometrical imperfection of the system or elasticity of the support wire may cause a beat between two horizontal modes of oscillation.” – Wikipedia
The plane of swing is affected by an eclipse
The amplitude of swing is affected by an eclipse
The eclipse effect is ridiculed on the Talk page but without further explanation
The ‘fixed frame of reference’ with respect to which the pendulum is assumed to maintain its plane of swing is never clearly identified, nor any mechanism by which a pendulum might interact with it.
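For reference, the textbook prediction behind the quoted ‘360° in two days at 30° south’ figure is the sidereal day divided by the sine of the latitude. The sketch below simply tabulates that formula; it is the prediction being criticised here, not a measurement.

```python
# Textbook Foucault prediction (the figure being criticised above, not a measurement):
# precession period = sidereal day / sin(latitude).
import math

SIDEREAL_DAY_HOURS = 23.934

def precession_period_hours(latitude_deg):
    s = math.sin(math.radians(abs(latitude_deg)))
    return float('inf') if s == 0 else SIDEREAL_DAY_HOURS / s

for lat in (90, 49, 30, 0):   # pole, Paris-ish, the quoted 30 degrees, equator
    print(f"latitude {lat:2d} deg -> predicted period {precession_period_hours(lat):.1f} h")
```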
Conclusions from experimental evidence of Coriolis forces
The arguments for Coriolis forces at the planetary scale and the scant evidence from Foucault pendulum experiments are insufficient to support the historic claims made for them, and at the same time they do not contradict the idea of a gravitational field acting as an inertial frame of reference that is stationary with respect to the surface of the Earth.
Gravity as an electromagnetic field
The nature of the field can be largely derived from everyday observations as above, but we can consider the idea that it is in fact an emergent property of an electromagnetic field and equivalent to the sum of all the magnetic dipoles of all the spinning charge comprising the planet. This will provide further insights.
If this is true then the gravitational field is continuous with all the atomic charge fields and hence its movement must necessarily be continuous with the rotation of the Earth. Such a field will have complex, fine grained structure and although diminishing according to radius in the manner of a Newtonian field, will not consist of a simple radial field but will have meaningful horizontal components which give rise to inertia.
Konstantin Meyl posits such a field with his Theory of Objectivity and allows for nothing else existing in reality apart from such a field. A ‘field’ in physics is described by differential equations and obeys the Locality Principle, meaning there is no action at a distance and that all behaviour is determined completely by strictly local field interaction.
It follows from this that the behaviour of water in a spinning bucket is determined solely by local (gravitational) field conditions and is unrelated to any influence from the distant stars or from any such thing as ‘absolute space’. There is no provision within the field equations for any external influence and no need for an independent frame of reference as the field itself provides its own reference frame which is usually of a toroidal geometry.
Newton claims that a body will move in a straight line unless acted upon by a force, but singularly fails to define what is meant by a straight line. From the perspective of a field theory then, we can now invert this proposition and actually define an ‘inertial trajectory’ as that of an ‘unimpeded solid object in an inertial field’. So even geometry is now defined by an observation as opposed to an abstraction.
This formulation has the added attraction that it defines things in terms of observable and hence measurable reality, with no need for the assumption of superfluous variables or entities. Passive gravitational mass is not measurable and the assumption of an ‘absolute’ frame of reference is not only unprovable, but now necessitates an additional explanation as to how such a reference frame should influence physical reality.
A further advantage of the adoption of the description of reality in terms of a single field structure is that it narrows down the possibilities, thereby restricting speculation and discouraging the unrestrained invention of novel and often inconsistent mechanisms.
A complex gravitational field
If we accept the general idea of a field model then there is no such thing as an absolute frame of reference and there is no such thing as action at a distance. All influences are via local field conditions only and so a pendulum is moving with respect to a frame of reference created by the gravitational field itself.
The gravitational field can be seen as an extension of the electromagnetic field of all the matter in the planet and as such will rotate with the Earth and will obey the laws of electromagnetism, which are complex, asymmetric, non-linear. The underlying equations are nothing like the simple radial field of Newtonian gravity but will produce something like a radial field on large scales thereby giving the illusion of something much simpler.
The temptation to imagine these laws operating within some Euclidean space should be resisted. The field at the surface of the Earth operates within the much larger vortex structure of the Earth’s sphere of influence and it is this larger vortex that actually determines the global geometry and no doubt contributes to the local field conditions at the surface.
A self-consistent paradigm
From one point of view, if a pendulum has an apparent deviation from the ‘straight’, then it is subject to some acceleration. However, if we define ‘straight’ as the path actually taken, then no ‘real’ acceleration takes place. ‘Physical straight’ and ‘geometric straight’ are now quite different concepts. Acceleration is ‘the action of an inertial field‘ as opposed to ‘a change in motion‘.
This makes perfect sense and leads to an improved and self-consistent science.
Newtonian and other theories claim matter, mass, distance, position and time as ‘fundamentals’ of the framework, but mass is unmeasurable, the idea of a straight line is undefined, time is ambiguous and even the idea of ‘position’ is unclear (position with respect to what, exactly?). In all cases, quantities are assumed to be relative to some absolute framework that can never be directly measured and is merely imagined.
To use a field construct as a reference frame, however, leads to a self-consistent theory consisting of a theoretical equation for the behaviour of the field and a set of measurements taken from actual reality.
Free movement (free-fall) is that which takes place according to the laws of an inertial frame and is driven by such a frame. A straight line is that followed by a free falling object. The parabolic path taken by a thrown object is inertially straight but geometrically curved because the observer is continually accelerating against the inertial field. The laws of geometry and movement are those of a local electromagnetic field shaped by an enclosing vortex structure.
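The claim that the parabolic path is inertially straight can be checked in a few lines using nothing but standard kinematics: subtract the free-fall motion of the frame from the projectile’s ground-frame coordinates and the curvature disappears. The numbers are illustrative.

```python
# Sketch of the 'inertially straight' claim: a projectile's parabola, re-expressed
# relative to a freely falling frame, is a straight line.
import numpy as np

g = 9.81
vx, vy = 3.0, 4.0                  # launch velocity components (m/s)
t = np.linspace(0, 1, 5)

# Ground-frame (geometric) coordinates: the familiar parabola
x_ground = vx * t
y_ground = vy * t - 0.5 * g * t**2

# Coordinates relative to a frame that is itself in free fall from rest
y_freefall_frame = y_ground + 0.5 * g * t**2   # = vy * t, i.e. a straight line

for ti, yg, yf in zip(t, y_ground, y_freefall_frame):
    print(f"t={ti:.2f}s  ground y={yg:+.3f} m   free-fall-frame y={yf:+.3f} m")
```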
Applied forces can ‘accelerate’ objects against the inertial frame. Geometric movement is that which is determined by relative distances, where such distances are themselves determined by the intensity of the field. Geometry itself is determined by the field structure, and ‘mass’ is a simplified way of quantifying a vortex; a single metric for a complex structure.
Movement and acceleration are now described in terms of actual physical processes as opposed to deriving from an abstract geometry that resides in some other-worldly realm of ideal forms.
Newton’s first law
A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.
The weakness of the law is now easily seen. The concepts of ‘straight line’ and ‘constant speed’ are ill-defined and so the law makes no sense.
To define these concepts we need some frame of reference by which to compare ‘speed’ or ‘straight’ and no such frames have been adequately described. Newton advocated for some ‘absolute’ frame of reference whilst Mach preferred to compare the local motion of objects to the distant or ‘fixed’ stars, but neither of these is really satisfactory from a practical point of view since neither reference frame is available for direct measurement. Both are simply ‘terminology’ without any real meaning.
As for empirical verification, we can try to find an experiment demonstrating the truth of the First Law; we can look for an object travelling through space in a straight line forever, but no such experiment exists. All objects in space are observed to travel in curved orbits of some sort and all are therefore inferred (from the first law) to be subject to the ‘force’ of gravity.
The reasoning is circular and the idea of an object travelling in a straight line, free from force, is redundant, since no such thing can ever occur in a universe permeated by gravitational fields.
The frame of reference must be the local gravitational field itself; this is by now ‘obvious’.
The Tamarack mines experiment
A wire was measured at the surface of the Earth and again at the bottom of some mineshafts where it was found to be considerably shorter. The reason given by Meyl is that the horizontal component of the magnetic field grows stronger for a small distance towards the centre of the Earth and it is this phenomenon that literally shrinks the wire by manipulation of the physical geometry.
Gravity is therefore more complex than a simple radial field emanating from the centre of a mass.
The sun is said to have very little equatorial bulge despite its large size and gaseous composition and rotates at different speeds according to latitude. This seems at odds with classical physics but makes perfect sense when viewed through the lens of vortex physics.
The sun is the centre of a rotating gravitational field and the surface of the sun is continuous with such a field. The field accelerates inwards and forms one ‘radius’ at the surface and possibly another at the chromosphere. The shape of the sun is determined by the overall configuration of such a vortex which obeys the laws of electrodynamics. Meyl gives a description of an electron as being stabilised by the weight of the whole universe compressing inwards and points out that the sphere is the most stable shape that could possibly result from this.
The same no doubt holds for larger objects and the sun, being gaseous and hence more easily shaped by a gravitational field than a solid planet, ends up being more spherical instead of less.
The gravitational field of the sun rotates with the surface and hence forms a stationary inertial frame of reference with respect to the surface, as with the Earth. There is a big difference here, however, which is that there is no solid body rotation on the sun but a differential rotation that varies with latitude. The question then arises: “What is the behaviour of a Foucault pendulum at the surface of the sun?”. Exercise for the reader!
The Moon and Jupiter
Jupiter has a fast spin and a large equatorial bulge, and so the bulge is attributed to the rapid spin. However, the Moon has a larger equatorial bulge than its very slow rotation can account for, and so its bulge is attributed to something other than spin. The sun has a large mass and size and a significant spin but essentially no equatorial bulge, and nobody understands this. An obvious inference is that the equatorial bulge is simply unrelated to the mass or spin of the body in question.
Variation in day length of the Earth and Venus
The rotational speeds of both the Earth and Venus vary from day to day, with the day length of Venus varying by up to 20 minutes. How does this happen?
One explanation is that there is an exchange of angular momentum between the interior of the planets and their surface. In other words, molten iron sloshes around and alters the rate of spin, as an ice skater might do by changing her moment of inertia. This is hardly credible: it would mean the transference of angular momentum by mechanical means, which would surely lead to all sorts of stresses in the crust of the planets, with tidal waves and earthquakes as an inevitable consequence.
It must be the case that the planets are affected in every single atom at the same time and this implies an inertial field. Each planet is at the centre of an extended gravitational vortex with the vortex having slight fluctuations of rotational speed. Again, this sort of thing is visible in the eddies in river currents. This requires some explanation in Newtonian physics but is to be regarded as default behaviour in vortex systems.
‘Oumuamua
‘Oumuamua and other objects are observed to accelerate away from the sun, apparently against the (Newtonian) gravitational field and various hypotheses are put forward to explain this. A better way to proceed might be to consider a more complex version of the gravitational field as described above and a more complex form of interaction than merely ‘attraction’. It has already been hypothesised that gases may interact differently to solids in a gravitational field and we may be seeing, with these objects, a different form of behaviour again.
Many of these visitors to our solar system have the appearance of energetic field vortices akin to a ball lightning phenomenon. A spinning vortex of pure electric field accumulates energy and matter continually according to the vortex principle and propels itself through space in a manner similar to a smoke ring. Once close to the sun, the dynamic electromagnetic field structure interacts strongly with the gravitational field of the sun and the resulting forces now dominate the movement of the ‘object’. The local gravitational field conditions and the dynamic field structure of the object itself will both contribute towards the movement and again, an analogy with ball lightning is appropriate.
These objects use their internal electrodynamics as an ‘engine’ to drag themselves through a gravitational field. Energy is dissipated in the form of light and matter but they are, nevertheless, at the centre of a larger vortex structure and will continue to accumulate energy as they move through the cosmos. If they did not continually ‘refuel’, then how are there any of them left in the universe?
How do these objects arise in the first place? They arise as spontaneous concentrations of vortex ‘energy’ much the same way that a local vortex may form in a flowing river from the spontaneous confluence of global currents.
Very likely many unidentified aerial phenomena are of this nature and will exhibit complex behaviour in the vortex wake of an aeroplane.
The Michelson Morley experiment
In the Michelson Morley experiment, two perpendicular beams of light were found to travel at the same speed despite the rotation of the Earth and its orbit around the sun. This result is consistent with the idea that the gravitational field at the surface of the Earth is not only inertially stationary with respect to the Earth but also forms a locally isotropic reference field for electromagnetic propagation.
This isn’t too far-fetched. A gravitational field is hypothesised to be essentially electromagnetic in nature and photons are some sort of propagating electromagnetic field. The gravitational field therefore acts as a sort of carrier wave for the photons, which adjust their speed according to the local environment.
If this is true then gravitational lensing effects are to be expected and these are indeed observed. The gravity in these effects is not acting as an inertial field upon ‘mass’ but as an electromagnetic ‘medium’ which determines the speed of propagation of the photons.
The Lense-Thirring effect
The Lense-Thirring effect is usually described in terms of general relativistic ‘frame dragging’ where a rotating body such as the Earth will ‘drag’ some space-time around with it (how?), thereby affecting the movement of objects and the propagation of light.
This can obviously be reformulated in terms of a pure vortex structure where both the Earth and its inertial (gravitational) field rotate as a single body and give us the effects described. In terms of Newtonian or Einsteinian physics, the Earth has angular momentum because of its rotation and this is no doubt the instigator of the dragging. However, the frame of reference with respect to which the rotation is defined is never specified and so we ought not to assume that it exists.
We are not therefore able to say with any certainty that it is the frame that is being ‘dragged’, but only that the inertial field and surface movement are continuous with each other. The two move as a whole and it is quite wrong to attribute cause to one or the other when there is no need to do so and no evidence for such a phenomenon.
Summary
An alternative way of thinking about gravity has been described, first in layman’s language and derived from simple everyday observations and experience.
Next, a hypothesis for a gravitational field based upon an electromagnetic field has been shown to be consistent with the theory and to provide additional insights.
Thirdly, multiple known ‘anomalies’ which are incompatible with classical theory are given plausible explanations with respect to this new theory.
The idea of Coriolis forces at the planet’s surface is contested and the evidence from pendulum experiments is found to be insufficient to prove anything either way.
The local gravitational field has horizontal components as well as radial ones and forms a defining frame of reference for the local movement of matter and indeed the propagation of light.
The symptoms of influenza are created by an altered state of biological regulation; this is clear and agreed upon by many sources. The regulatory system of the body is electromagnetic in nature and is therefore susceptible to external electromagnetic influences such as cell-phone radiation and seasonal discharge of EMF from the ionosphere.
There is ample statistical evidence to show correlation between such influences and outbreaks of influenza. However, this seems to be largely ignored by both mainstream and alternative theorists in favour of other explanations, often with little to no supportive evidence or attributed mechanism.
Psychological trauma, viruses, stress and detoxification are a few examples that have been proposed without clear definitions, evidence, purported mechanisms or any validation from historical epidemiological studies.
Moreover, there is now a fashion for trying to say that ‘disease’ does not really exist at all but that symptoms are in fact good things and arise from the body attempting to ‘heal’… but from what?
This post puts the case that influenza is an altered regulatory state brought on by electromagnetic discharge from the atmosphere. Historical epidemiological studies give support to this idea, as do data showing a correlation between the strength of the Earth’s magnetic field and cases of ‘covid’.
A recent paper from Denis Rancourt and team is consistent with the known epidemiology of flu and is therefore supportive of this hypothesis. The paper posits that pneumonia is caused by ‘fear’ but an alternative idea is that it is caused by tissue deterioration brought on by the sub-standard oxygen found in hospitals.
Influenza as dis-regulation
The condition called influenza is without a doubt one of an altered regulatory state.
The temperature is kept high and within narrow limits. Sleep is disturbed, appetite suppressed, muscles ache and a feeling of lethargy dominates a brain-fogged individual. This state persists for about 5 days whereupon it will suddenly abate and recovery begins.
Nobody believes that these processes are somehow managed by a simple virus particle and even mainstream science explains the symptoms by the reaction of the ‘immune system’ to some assumed pathogen. To rephrase: the symptoms are caused by the body’s own regulatory system.
The purported virus, then, is really just a trigger and the actual cause and maintenance of the symptoms lies within the body’s own regulatory processes. The damage caused by a ‘virus’ is merely local cell damage and everything else is somehow a result of the immune system trying to ‘fight’ the virus.
Many now believe that influenza and other diseases are caused by stress or psychological trauma. They are therefore proposing that the condition arises solely from the cognitive system and is a problem with regulation and nothing else! There is no physical particle to be fended off, no toxins to be rid of and the whole system is merely trapped in an antipathetic ‘attractor state’ which, once ‘resolved’, will allow a natural return to normal health.
None of this contradicts the characterisation of influenza as an altered regulatory state but much of it is at odds with the recorded epidemiology.
The detox theory
A significant minority of people are now claiming that disease in general, and flu in particular, is the result of a ‘seasonal detox’ whereby the body has stored up toxins in the liver, say, and will expunge them in winter for some reason. The symptoms of flu are said to be instrumental in this detoxification process and suppression of such symptoms is claimed to be a Bad Thing.
We would like to see:
Some sort of metric for overall toxicity
Correlation between high toxicity and susceptibility to flu
High toxicity preceding flu, lower toxicity afterwards
High presence of toxins in sweat or other excretions
An explanation (teleological reason) for the symptoms
A mechanism for the seasonality of the detox process
Even though this evidence is lacking, the hypothesis still involves the coordinated action of various regulatory processes in order to manage the symptoms, so we are in broad agreement that the symptoms are produced by an altered regulatory state and even that it is somehow initiated by a seasonal trigger.
What is in contention is the nature of the seasonal trigger and whether or not a detoxification does in fact take place.
Bio-regulation is electromagnetic in nature
What is it that is ultimately responsible for bodily regulation?
The usual answer is given in terms of bio-chemistry, i.e. in terms of the concentrations of different chemicals and bio-molecules within the system that act as ‘enablers’ or ‘effectors’. A molecule will send a signal (mechanism unspecified) or act as a catalyst (mechanism unspecified) and a further reaction or signalling takes place as a consequence.
This is all very well, but what is the initiating process? If any chemical transaction is effected by a prior transaction, then what is it that causes that prior transaction… and the one prior to that?
There must be some ‘entity’ that has taken the decision to initiate the whole process and that decision must depend upon some sensory or extra-sensory information since the onset of influenza most certainly depends upon external factors.
Simple molecules cannot make decisions by themselves, nor accumulate sufficient information to make such a thing meaningful. Moreover, a human, a chimp, a mouse and an elephant are all composed of the same chemicals, so if regulation consisted exclusively of chemical processes then we would all have the same body temperature and we would all look the same.
Regulation is not just simple chemistry but instead a fully ‘cognitive’ process, a bodily intelligence which operates according to the laws of electromagnetism in a similar way to the brain.
Connection to the cosmos
The body maintains a constant connection to the cosmos and uses this to maintain bodily rhythms. Various ‘signals’ connected to diurnal, lunar or solar cycles are detected by the cognitive system and used to set the phase and cycle length of sleep patterns, digestive processes etc. See: Frank Brown.
Cats are said to behave differently during Mercury retrograde: here
Minor disturbances of such signals can have a large effect on the organism; the dose-response relationship is non-linear and meaningful. Small, almost undetectable signals are processed by a system that is ‘cognitive’ in nature and the output is generally to the benefit of the organism.
The gravitational effects of the moon and certainly Mercury are surely too weak to affect living systems and so the only reasonable explanation for these phenomena is via some sort of electromagnetic signalling. By unknown means, the motion of each of the planets leaves an electromagnetic footprint in the pattern of discharge from the ionosphere.
The epidemiology of influenza
There is a general impression, I think, that flu is vaguely seasonal and this, in turn, engenders a general dismissal of the importance of the epidemiology, with many simply claiming that ‘cold weather’, ‘communal living’ and ‘altered eating patterns’ are sufficient to cause pandemic influenza.
The historical records, however, show consistent and complex patterns in the epidemiology:
Influenza occurs in sharp ‘spikes’ with sudden onset in the population and equally sudden disappearance, inconsistent with a viral spread
Flu is approximately seasonal (winter) with slight variations from year to year
There are minor seasons also in spring, summer and autumn
The winter season in the northern hemisphere starts in the tropics and moves northwards
Synchronicity between latitudes is demonstrated
Local outbreaks can be precisely predicted by local weather conditions, e.g. the onset of the monsoon or rapid change in humidity or pressure
There exist correlations with the installation of novel cell-phone technology – 5G and Covid
Local outbreaks can be sharply confined to a relatively small area
There exists a fairly consistent attack rate of about 10% of the population
Rapid changes in metrics (temperature, humidity) seem more important than absolute values
These features need some explanation; if flu is caused by psychological trauma, for example, then why is psychological trauma experienced in such specific locations at such specific times?
Hypothesis
Hypothesis: Influenza is an altered regulatory state resulting from the impact of antipathetic electromagnetic impulses upon the regulatory system. This state may or may not function as a detoxification.
The disturbances from the Earth’s magnetic field show both seasonal variation and distinctive local patterns aligned with changes in the weather. See: Influenza and weather.
5G and ‘covid’
Many studies were made during the covid era of associations between the rollout of novel cell-phone technologies and either cases of, or deaths from, covid a.k.a. influenza.
In the chart below, Magda Havas PhD looked at deaths from covid in US states with 5G and those without 5G and found that on average there were twice as many deaths per million in those states that had 5G installed. [source]
Those dismissing 5G technology as a cause of disease on the grounds that it is non-ionising need to read Frank Brown and reflect that a very small impulse indeed can produce a relatively large effect.
The idea that damage can only be produced by stripping electrons from atoms is naïve in the extreme. Cellular damage is just damaged cells and does not constitute disease; you can burn a finger quite badly but will not enter a fever state. The damage results from the response of the regulatory system, not from the original impulse, which may be large or small.
The disease gives rise to the damage and not the other way around.
Historical correlation with man made EMF
The chart shows influenza deaths per million in England and Wales from the Stuart-Harris paper: here
A sudden and significant increase in deaths occurs in 1890 coincident with the rollout of mains electricity and hence the subjecting of the population to novel 50 Hz electromagnetic waves.
Another huge spike in deaths occurred in 1918-1919 (Spanish flu) when advances in radio technology exposed members of the military to more radio wave frequencies than ever before. The Spanish flu is said to have started at military bases in the US and to have moved around the bases affecting only military personnel.
Influenza deaths per million – England and Wales 1850-1940
Further rollouts of radio-wave technology are associated with outbreaks of flu or other pandemic diseases.
Connection to weather systems
Many surprising correlations exist between outbreaks of influenza and local weather patterns.
Spatial Variation in Humidity and the Onset of Seasonal Influenza Across the Contiguous United States – [Serman et al.]
Serman et al. found a strong correlation between influenza cases and humidity. See below for the weekly figures for Arkansas. Influenza cases rise slowly with decreasing humidity up to a specific ‘breakpoint’ whereupon there is a sharper increase in cases relative to humidity.
Similar and very striking charts were produced for other states but with the ‘breakpoint’ (here at about 0.05) being slightly different for each state.
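As a rough illustration of the kind of ‘breakpoint’ fit described here, the following sketch fits a two-segment line to synthetic data. The breakpoint value of 0.05 is taken from the text; the data, slopes and brute-force fitting method are invented for illustration and are not those of Serman et al.

```python
# A minimal sketch of a piecewise-linear ('breakpoint') relationship between
# humidity and case counts, fitted to synthetic data. Not a reproduction of the
# published Arkansas analysis; all numbers other than the ~0.05 breakpoint quoted
# in the text are invented.

import numpy as np

def piecewise(h, bp, base, slope_above, slope_below):
    """Cases rise gently as humidity falls above the breakpoint, steeply below it."""
    return np.where(h >= bp, base + slope_above * (bp - h), base + slope_below * (bp - h))

rng = np.random.default_rng(1)
humidity = np.linspace(0.02, 0.10, 300)                       # synthetic humidity values
cases = piecewise(humidity, 0.05, 50, 200, 4000) + rng.normal(0, 5, humidity.size)

# Brute-force search for the breakpoint minimising the squared error of a
# two-segment least-squares fit (a crude stand-in for a formal breakpoint model).
best = None
for bp in np.linspace(0.03, 0.07, 81):
    X = np.column_stack([np.ones_like(humidity),
                         np.clip(bp - humidity, 0, None),     # active below the breakpoint
                         np.clip(humidity - bp, 0, None)])    # active above the breakpoint
    coef, res, *_ = np.linalg.lstsq(X, cases, rcond=None)
    err = float(res[0]) if res.size else np.inf
    if best is None or err < best[0]:
        best = (err, bp)

print(f"estimated breakpoint ≈ {best[1]:.3f}")
```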
The chart shows a relationship between humidity and influenza cases but the hypothesis is that dis-regulation is caused by electromagnetic changes. This page: Influenza and weather argues that changes in humidity are invariably accompanied by atmospheric electromagnetic activity.
Rapid changes in weather conditions are invariably caused by the passing of a weather front. Such phenomena are, in turn, associated with air turbulence and the formation of tornadoes, especially in winter. Whirling air causes the movement of charged particles in the atmosphere, which leads to the emission of electromagnetic pulses, and it is these that will disturb the regulatory system.
Changes in the weather
Even after the virus hypothesis had been proposed, many doctors continued to challenge the idea of transmission and to emphasise the connection between the date of pandemic onset and a change in the weather.
Periodic seasonal outbreaks of acute respiratory disease have occurred in the military population in and around San Antonio, Texas, for the past 5 years. In certain instances the epidemic has supervened upon or become worse after a spell of cold or wet weather.
Several organisations have shown simultaneous beginning of outbreak. Eighty percent of 252 consecutive cases gave a history of recent exposure to dampness, lowered temperature, or other adverse influence.
The symptoms of 981 cases of acute bronchitis constituting an epidemic could have been caused either by bacterial infection, by exposure to lowered temperature, or other adverse influences. The distribution and dates of onset indicated that the majority of cases (80 per cent) were provoked rather than transmitted cases.
The onset following so promptly after exposure to lowered temperature (My note: i.e. a change in atmospheric conditions) affords evidence that the early phase of the attack is not due to bacterial activity, but is almost solely to the excessive reactivity of the economy toward the injuriously physical or other equivalent influence.
Page 23.
Richter, of San Francisco, in a voluminous discussion, advances the anticyclonic theory as the prime etiological factor in influenza:
The very source of the pandemics is found to be in that part of Asia Minor or North America on the Northern hemisphere where the centres of highest pressure are located on those parts of the continents. The air, carried by such dynamic anticyclones is distributed with a velocity equal to that of our railway trains.
The cycles in which the pandemics alternate with periods of relative quiescence are distinctly covered by the cycles of high air pressure periods during and before the pandemics and of low pressure following them. The influenza pandemic extends and spreads in the same direction and with the same velocity as the great anticyclone spreads from its centre over a continent. In the United States, therefore, it generally attacks first, those districts that lie in the path of the Alberta type or Hudson Bay type.
We have reason to believe that air of some anticyclones contains ozone in unusual quantity as a product of unusual solar output.
Page 24.
Latitudinal synchronisation
Edgar Hope-Simpson noticed coincidences in both the timing and severity of flu outbreaks in Prague (52°N) and Cirencester (51°N). Similar synchronisation across distant parts of the globe is noted by other authors and apparently connects the US to France and India to Europe.
Again, it is not too unreasonable to suppose that influences showing latitudinal patterns are somehow related to local characteristics of the Earth’s magnetic field.
A conceptual vacuum
With the relatively recent realisation that disease is not the result of viral infections, we are left asking: “So what is it then?”. This question now reverberates through a conceptual vacuum, an existing theoretical framework bereft of any means by which to even formulate a decent hypothesis.
Nearly 100 years of virology and associated thinking patterns now must be discarded, leaving us with little idea as to why people get sick and not even a decent vocabulary by which to describe the nature of ‘disease’ itself.
People therefore are grasping at straws, scrabbling for some answer amongst extant ideas or avoiding the question altogether by simply denying that such a thing as ‘disease’ really exists.
Consider then that, if thousands of intelligent scientists are wrong or are mis-interpreting their own results, and if common-sense explanations of disease are deficient in their relation to reality, then there is something essential missing from our basic understanding of biological systems.
Textbook knowledge seems to work to some degree but consists in large part of a list of observations rather than causal mechanisms. A convincing narrative has been constructed from the patterns projected on the back of Plato’s cave and these have been mistaken for a complete three dimensional description of actual reality.
Books abound on the ultimate nature of biology, of ‘life’ itself, but no ultimate answer is presented. We therefore have a case of an extraordinary problem needing an extraordinary solution. Some lateral thinking (at least!) is needed and we should expect that the eventual answer, if found, is more likely to be the one that least appeals to us than the one that is intuitively attractive.
It is certainly not necessary to provide an alternative cause of disease to know that disease is not caused by a virus, but the absence of such a cause seems to weigh heavily on the cognitive system and as a consequence, imagined alternatives proliferate in order to fill the void.
One popular alternative is the idea that ‘fear’ is a driving factor behind disease processes.
This is formalised by the German New Medicine group which claims that flu in particular is caused by ‘fear of aggression’, but this fails to explain why we all succumb to such fear on a seasonal basis, how latitudinal synchronisation of such a dysfunction could take place, or why it should coincide so precisely with changes in the weather.
If fear really were a cause of disease, we should surely see the end of all armed conflict in the world, as both military and civilian populations would hardly be able to function! Disease would break out at the mere thought of fighting and the fear of disease itself should then multiply and spread like a… virus! However, no ‘spreading’ is observed in the epidemiology.
We should discard this idea until some actual evidence is presented.
Influenza as ‘just a collection of symptoms’
This idea is that each of the symptoms of influenza separately has an independent cause which is not viral, with the implication that this formulation is sufficient to explain influenza itself as merely a coincidence of such symptoms.
This characterisation, however, is a highly simplified description of the aetiology, with no explanation as to why such symptoms form characteristic clusters, why such clusters should appear so suddenly and in a coordinated manner, or why they should follow the typical seasonal and latitudinal patterns that have been observed throughout history.
A characteristic collection of symptoms will appear all at the same time, follow a typical progression over 5 days or so and disappear in synchrony as quickly as they came. The symptoms are carefully choreographed and managed, implying a systemic coordination or regulation.
From this point of view, influenza is a regulatory phenomenon.
It may well be that each symptom has alternative causes, but these causes seem to be invoked annually in accordance with the seasonal and latitudinal patterns that have historically been attributed to a single disease, namely, influenza.
There is something here that needs explaining.
Covid and geo-magnetic activity
The spread of the Sars CoV-2 virus depends on the Earth’s magnetic field – M.I. Orlyuk, A.O. Romenets, 2022
The chart shows the Kp-index of the Earth’s magnetic field in blue (trend line in black), with the Covid incidence rate in red. The Kp-index is, I think, a global average and hence a somewhat crude metric, but the match between the two patterns is striking, indicating some kind of causal relationship.
On some occasions the rise in covid cases follows the geomagnetic changes but at others it seems to pre-empt them.
It therefore seems unlikely that a decrease in the strength of the Earth’s magnetic field is a direct cause of disease, rather that this is a crude metric which is indicative of some other atmospheric phenomenon which itself is responsible for the disturbances of the regulatory system.
Orlyuk, Romenets
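One simple way to examine this lead/lag question is to correlate the two series at a range of shifts and see which lag correlates most strongly. The sketch below does this on synthetic data; it is a method illustration only and does not reproduce the Orlyuk and Romenets analysis.

```python
# A minimal lead/lag comparison between a Kp-like series and an incidence-like
# series. Both series are synthetic; the 5-step lag is built in so the method
# has something to find.

import numpy as np

rng = np.random.default_rng(2)
n = 200
kp = np.cumsum(rng.normal(0, 1, n))                   # synthetic geomagnetic index
incidence = np.roll(kp, 5) + rng.normal(0, 1, n)      # synthetic cases, lagging kp by 5 steps

def lagged_corr(x, y, lag):
    """Pearson correlation between x and y shifted by `lag` samples (y lags x for lag > 0)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])

best_lag = max(range(-15, 16), key=lambda L: lagged_corr(kp, incidence, L))
print(f"strongest correlation at lag {best_lag} samples")
```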
Rancourt et al.
Denis Rancourt and his team made a thorough survey of mortality data pertaining to the 2020 covid pandemic and spotted several striking patterns typical of influenza outbreaks.
Note that the figures were for excess mortality and not ‘cases’. The overall conclusion of the paper is that the excess deaths were mostly caused by faulty hospital protocols. No doubt this is true, but there has to be some reason why people are admitted to hospital in the first place; there has to be some sort of ‘case’. What we see in the data is that the pattern of excess deaths matches established historical epidemiological structures:
Geographic heterogeneity of first-peak period excess mortality: Sections 3.1 and 3.2 demonstrate that there was a high degree of geographic heterogeneity in excess mortality in the USA and Europe, with a handful of geographic regions having essentially synchronous (within weeks of each other) large peaks of first-peak period excess mortality (“F-peaks”) and all other regions having low or negligible excess mortality in the said first-peak period. -Rancourt et al.
Yes, both heterogeneity and synchronicity are expected from influenza. We see synchrony of outbreaks according to seasonal patterns and changes in weather conditions such as the onset of a rainy season. Synchrony is often along lines of latitude, with peaks sometimes within days or even hours of each other.
Heterogeneity exists at all geographical scales from that of a continent to that of a cruise liner or school dormitory (Fred Hoyle).
The incidence of flu has a ‘fractal’ nature, similar to the nested cyclonic structures of weather systems.
Temporal synchrony of first-peak period excess mortality: Section 3.3 shows that F-peaks for USA states and European countries were almost all positioned within three or four weeks of one another and no earlier than the week of the WHO’s pandemic declaration. -Rancourt et al.
The coincidence with the WHO declaration is interesting and the hypothesis of Rancourt is that the fear of a disease somehow caused sufficient symptoms for a person to be admitted to hospital whereupon they were subjected to harsh treatment that ultimately led to excess deaths.
The chart below is striking and hard to argue with. Note, however, that it is excess deaths and not case numbers that are represented. This is important as, in the chart from Orlyuk and Romenets, we see no apparent rise in cases at this time at all; the number of excess deaths appears unrelated to the number of cases.
The cases are related to the strength of the Earth’s magnetic field but the deaths are as a result of the change in hospital protocols in response to the pandemic declaration.
Dramatic differences in first-peak period excess mortality for comparable cities with large airports in the same countries. -Rancourt et al.
Again, this is expected both from historical precedent and from theoretical considerations of the hypothesis. Flu is not spread by a virus but is caused by antipathetic electromagnetic weather conditions, and these can demonstrate variation and synchrony at both large and small scales. The general pattern of a weather system is a fractal vortex structure, with rapid changes in humidity and pressure coincident with the overall geometry.
Moreover, airports and hospitals are at the forefront of cutting edge technology for internal communication systems. If one airport is using 5G and another is not, then a difference in ‘cases’ is expected, followed by a difference in excess mortality.
An aeroplane flight and a visit to an airport seems sufficient to trigger a case of flu: study
Increased share of deaths occurring in hospitals for jurisdictions with large F-peaks
Where I live, there was a rollout of new cell-phone masts near hospitals and care homes. Nurses told me of an increased use of wireless monitoring equipment within new hospital developments.
Correlations with socioeconomic vulnerability in regions with large F-peak
The paper hypothesises that patients from areas of low wealth were admitted to new hospitals in richer areas and subjected to aggressive treatments such as forced intubation which led to their demise.
It does seem that lower income areas are targeted for the construction of more radio-frequency transmitters whether 5G or not. We have been objecting to masts in large residential estates with little effect but mast applications in wealthier areas seem to be few and far between and are more likely to be rejected.
In the UK, a proposed siting for a mast needs to be a few hundred yards away from residential dwellings for safety. How is this achieved in the middle of a town? The answer is evident in the number of masts that are in the middle of large car parks attached to hospitals, supermarkets or care homes.
We propose that the said observations of geographic heterogeneity and temporal synchronicity of first-peak period excess mortality, .. were caused by region-specific application of first-peak period lockdown policies and dangerous medical-system treatments, including invasive mechanical ventilation
Yes, indeed. Recent papers from Konstantin Meyl and Gerald Pollack have both argued that it is not oxygen we breathe but some kind of electrical energy (Meyl) or electric charge (Pollack). Do we breathe oxygen?
If this is true then it follows that not all oxygen is of the same quality and that some may contain more energy than the rest. Now if the health services are ignorant of this and have a tendency to ascribe any failure of treatment to an insufficient concentration or pressure of the gas, then we are in big trouble.
Depleted oxygen will be forced into patients’ lungs with zero beneficial effect and every possibility of causing further damage.
In addition to this, Konstantin Meyl presents a credible model of a gas whereby a molecule has expanded to nearly 1000 times its liquid volume to form a large, negatively charged ‘ball’. We must therefore ask how such a molecule passes through the membranes of the lungs, and what happens if it is ‘forced’.
Some of the oxygen pumped into the lungs of patients may be fine, but some may be depleted of energy and a decline in health seems inevitable in these cases.
We argue, following Rancourt (2024), that pneumonia induced by biological stress of lockdowns and medical-system intervention was ultimately responsible for the very large first-peak period excess mortality that occurred in hotspots such as New York City, Lombardy, Madrid, and London, UK
In view of the previous discussion, we can now make a hypothesis that the lining of the lungs is nourished directly by the energy of the inhaled oxygen and that if such energy is at a low ebb then the health of the patient’s lungs will deteriorate and tissue will start to die. Bacteria proliferate, thriving off the dying cells and resulting in the condition we call ‘pneumonia’.
This can happen through either the forced inhalation of compressed air or through the breathing of the ambient hospital air, depleted through constant recycling. All that is required for a cure is exposure to fresh air, energised by sunlight and ionospheric discharge.
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
An anecdotal description of a distinctive epidemiological pattern for the common cold suggests the possibility of contagion. Contagion is via bio-field transmission and requires sustained physical proximity or contact. Infectiousness is proportional to intensity. There is no incubation period. The hypothesis finds some support in the results of clinical trials.
Anecdotal evidence
The evidence is from my own personal experience. I hardly ever get ill and cannot remember the last time I visited a GP. I will, however, get the occasional mild cold and the circumstances are almost always the same.
My wife, ‘J’, who almost never gets ill, will visit London for a day trip and return, more often than not, feeling tired and unwell. The following day she will announce that she has a cold and I will likely start to feel unwell too. I will admit to having a cold the following day and take a day or so off work. This pattern has been fairly consistent for the last 40 years.
I used to think that the unusual viral flora in London was the initial cause and that maybe some apprehension on my part contributed to a lowering of the immune system. I no longer believe in viral transfer and am confident, each time she returns, that I will not get sick as a consequence; and yet I still get sick.
This has happened one too many times now for me to think that it is a coincidence and so I am prepared to entertain the idea of some sort of contagion.
I will occasionally contract a cold by some other means and J will often then succumb the next day. I can’t remember an occasion when we both got a cold at the same time and it seems to me that the first person to get the cold suffers a little more, with the ‘recipient’ having a milder complaint.
The pattern is so distinctive and persistent that some explanation is demanded.
Hypothesis
A cold is a disturbance of the bio-field, a regulatory disorder
It can be transmitted by sustained proximity e.g. sleeping in the same bed
Contact between areas of skin may be a factor in transmission
There is no incubation period as such
The transmissibility of the disease is related to its intensity
Key points
The lack of incubation period helps to limit the spread of the disease; people get sick within hours and stay at home instead of spending 10 days wandering round in an asymptomatic state infecting other people.
The idea that intensity of disease is related to transmissibility is new and not quite the same as that of ‘viral load’. A major disturbance of the bio-field leads to ‘stronger’ or ‘more destabilising’ emissions which are more likely to affect another person. People who have the stronger emissions, however, are also those who are more likely to stay at home, thereby further restricting the spread.
On the other hand, those who remain active are the ones with the milder disturbances which are now less contagious. This is somewhat counter to the viral hypothesis where even an asymptomatic ‘super-spreader’ can trigger a global pandemic.
The above points are likely major confounding factors in the epidemiology; ‘spread’ is very limited.
Transmission requires sustained periods of exposure to an antipathetic bio-field, thereby further reducing the chances of transmission. The Common Cold Unit did all they could to prevent transmission this way by insisting upon a separate bedroom for each patient.
Transmission via droplets either on surfaces or in the air may be possible if there is some bio-field activity left in the droplet, but close, sustained contact with an entire human body will have more of an effect.
Biological synchronisation
The idea that biological information can be transferred from one person to another independently of the transfer of any material substance will be hard for some to accept. However, there are multiple papers on this, for example:
The authors found that not only did heart rhythms synchronise, but sleep phases as well. The coupling between individuals was found to be not a phenomenon of mere resonance, but an interaction between the complex systems regulating such processes: “Synchronization is a phenomenon of adjustment of rhythms due to interaction between periodic or weakly chaotic systems (Pikovsky et al., 2003)”
This is the transfer of meaningful biological information.
Sleep synchrony was greatest when couples shared a bed: “A recent study demonstrated significantly more synchronization of sleep stages when couples slept together than when they slept in separate rooms “
More anecdotal evidence
A woman has four children, all boys and the two youngest share a double bed. The children will invariably get sick one at a time about 24 hours apart.
When my (4) kids get the flu, they all catch it one after the other; I take care of them while ill, but rarely catch it myself although it might occasionally happen; my husband whom we share the house and the meals with, but has his own routines and rhythms (that is – acts per se in the family rhythms), never ever “catches” our kids’ flu.
1. The second kid that “catches the cold” after his brother shares a double bed with him.
2. Then – and not always! – and with much less intensity gets ill the other kid that doesn’t share room with the first two but does share everyday life experiences with his brothers.
3. The older one, who shares room but not bed, neither activities with his younger brothers rarely “catches” it. A few years ago, before entering the teenage years, he was also “part of the pack” and the contagion patterns were different, that is: much more similar to point 2.
Can you say anything about the timeline? Do kids catch it within days, hours or weeks?
Roughly 1 day (20-24 hours) apart.
Another anecdote
A young man works with disabled children and spends a large proportion of his time with one child in particular. There is a lot of physical contact and, as winter approaches, the child will succumb to diseases typical of care homes such as colds, flu and vomiting sickness. His carer will invariably succumb soon afterwards, with the same specific symptoms.
The carer will take some time off to visit his family and the following day his mother will display symptoms but never his stepfather. Some family connection, emotional closeness or maybe bio-field similarity seems to be implicated.
Similarity to shedding
These stories seem very similar to those narrated by victims of vaccine shedding: Vaccine shedding. Certain individuals seem to transmit and others receive, whilst proximity and relationships seem implicated. In some cases of shedding, a woman will suffer symptoms immediately upon being close to her husband in the evenings, feel relief when he goes back to work the next day, and then repeat the cycle the following evening.
Clinical trials
If catching a cold is so easy, there should be plenty of clinical trials demonstrating this. However, things are not quite that simple, as almost all studies will assume at the outset that contagion is likely and in addition, that the cause is viral transmission.
Almost any coincidence is interpreted as transmission, seasonal factors are ignored and attention is focused upon a restricted range of specific vectors (surface droplets, aerosol) to the exclusion of others. If surface transmission fails then airborne transmission is assumed and if sustained proximity is a factor then airborne transmission is assumed.
Nevertheless, we can still glean something from these studies once these biases are recognised and adjusted for. If sustained proximity is claimed as a factor but no virus was isolated, then we are nevertheless permitted to conclude that: sustained proximity was a factor.
Shared air, holiday colds and fresher’s flu
Shared air seems to be a factor in the transmission of colds and again viral transmission is usually assumed, but other explanations are available.
If a classroom of children are breathing the same air over and again then one possibility is simply that the air has become stale or depleted in some sense. The post: What causes pneumonia? suggests precisely this, that some vital energy has been removed from the air and it is this lack of energy that is responsible for deterioration of the respiratory tract.
Is this phenomenon partly responsible for ‘holiday’ colds, whereby people will succumb either before or after a break, but in either case, after spending a few hours breathing shared and recycled air on a plane in close proximity to a host of other passengers?
Is ‘fresher’s flu’ the same phenomenon?
Seasonal factors
Colds are heavily seasonal but this is never considered a factor in studies, as it is always assumed that transmission is via a virus which can spread more or less independently of the time of year. This has never been demonstrated of course, and never will be if the existence of a virus is always assumed at the outset, as this assumption automatically invalidates the proof.
An alternative cause for a common cold should at least be considered and since there is a strong seasonal pattern, this alternative should have something to do with the seasons. Associations with changes in the weather further suggest that such changes may actually be causal somehow, particularly if we are now doubting viral infection as a cause.
The post: Influenza and weather suggests a direct destabilising of the regulatory system by atmospheric EMF which produces outbreaks of disease which are geographically and temporally limited in their scope.
This possibility needs to be taken into account when performing clinical trials. The hypothesis here is that a cold is caught and produces symptoms within a few hours and that there is no such thing as asymptomatic transmission. Studies which appear to show contagion over several days or weeks are likely just picking up random cases caused by the seasonal effects mentioned above.
This is not contagion, but a primal cause and patients can be isolated all you like, but nothing will stop them getting a cold short of a solid lead Faraday cage.
Longer periods of study will necessarily yield more cases during flu season, but an off-season study will likely fail in this respect and for these reasons. From the point of view of viral transmission trials then, we expect to find studies that seem inconsistent and hard to interpret and we do in fact find them.
AI research
Time to look at clinical trials for support or contradiction.
I asked an AI engine: “Please list all studies where sustained close proximity was a salient factor in the transmission of the common cold”
AI summary
Sustained proximity (hours of being in the same room, regular contacts in household or school) tends to increase risk significantly. The card‑playing experiments and the household transmission studies are prime examples.
However, some studies show that even when people cannot touch their face (so reducing contact route), being in close proximity (shared air) over time still leads to infection. E.g. Aerosol transmission of rhinovirus colds with 12 h shared time. PubMed
The school study shows that even tens of minutes of close proximity per day can matter, but in that setting, the cumulative exposure in shared classroom air (many hours) seems more important than just minutes of close contact. PMC
The review (Transmission route of rhinovirus) also notes that while contact/fomite routes are proven, in many indoor settings airborne/small aerosol transmission with proximity and shared space seems to dominate. ScienceDirect
References from AI search
Aerosol transmission of rhinovirus colds (playing cards together for 12 hours) PubMed
Donors (infected) and recipients (susceptible) played cards together for ~12 h in enclosed setting. Some recipients were restrained (couldn’t touch their faces) to isolate aerosol vs contact routes. PubMed
High rates of transmission even among those who could not touch faces. Results suggested that over sustained close-proximity (12 h in same room, shared activity), aerosol transmission was strong; fomite-only route (via surfaces heavily used for 12 h) showed no transmissions in that particular experiment. PubMed
Rhinovirus transmission within families with children PubMed
Families were monitored for ~3 weeks after an index child had rhinovirus; includes frequent, close interactions among siblings and parents in a home environment. PubMed
Transmission was common: siblings had ≈1 case per person when index positive; parents lower but substantial rates. Suggests that in household settings (which imply sustained close proximity), spread is frequent. PubMed
Hand‑to‑hand transmission of rhinovirus colds PubMed
Short hand‑contact exposures (10 seconds) between infected & susceptible; this is more immediate than “sustained”, but still implicates repeated close interactions. Not so much long‑duration proximity but frequent enough contact. PubMed
Found that hand contact is very efficient in causing infection compared to large/ small particle aerosols in their setup. PubMed
Systematic review: Transmission route of rhinovirus (2022) PubMed
Surveyed many studies, some in natural settings (homes, schools, workplaces) where people share space over sustained periods. The review seeks evidence of how close proximity plays a role. ScienceDirect
Conclusion: moderate evidence that airborne transmission (large or small aerosols) is a major route; less strong evidence that hand/fomite + self‑inoculation dominates in real indoor settings. But many studies in shared spaces (which by necessity include sustained proximity) show transmission. ScienceDirect
“Relative contribution of close‑proximity contacts, shared classroom exposure and indoor air quality to respiratory virus transmission in schools” (Swiss secondary school) PubMed
Over 6 weeks, students (14‑15 y) wore sensors that recorded daily time spent within ~1.5 m of others (“close proximity”); also shared classroom hours and monitored air quality. Students spent ~21 min/day in close proximity; many more hours in shared classroom. PubMed
Time in close proximity modestly associated with higher transmission risk (rate ratio of ~1.16 per doubling of close‑proximity time). But shared classroom time (sustained exposure in closed space) and poor air quality were stronger predictors. This suggests that sustained shared‑space exposure matters more than fleeting close proximity. PubMed
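A quick arithmetic restatement of the quoted figure: a rate ratio of ~1.16 per doubling of close-proximity time compounds with each doubling, so relative risk grows only slowly with exposure time. The exposure times used below are illustrative.

```python
# Restating the quoted school-study figure: ~1.16x risk per doubling of
# close-proximity time, compounded relative to the ~21 min/day baseline
# mentioned in the study summary. No new data, just the quoted number.

import math

RATE_RATIO_PER_DOUBLING = 1.16
BASELINE_MINUTES = 21

for minutes in (21, 42, 84, 168):
    doublings = math.log2(minutes / BASELINE_MINUTES)
    risk = RATE_RATIO_PER_DOUBLING ** doublings
    print(f"{minutes:3d} min/day -> relative risk ≈ {risk:.2f}")
```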
Summary
Evidence which is anecdotal but nevertheless forms a distinctive epidemiology leads to novel hypotheses for the mechanisms of initial acquisition and transmission of a common cold. These hypotheses lead to predictions for the general epidemiology that seem consistent with existing studies.
These mechanisms predict that:
Contagion is related to prolonged contact or close proximity
Infectiousness increases with severity of symptoms
Each transmission reduces infectiousness thereby limiting spread
Seasonal effects appear spontaneously in a population
There is little to no incubation period
Droplet transmission is irrelevant
The sustained breathing of depleted air causes sickness
The sharing of air in classrooms leads to depletion
The phenomenon of group sickness gives the impression of contagion
Note that one of the main reasons that people believe in contagion is that many people are observed getting sick at the same time. However, here we find that major outbreaks are in fact caused by non-infective agents (atmospheric effects) and that cases of actual transmission have a lesser impact on the overall epidemiology. Unsurprising, then, if there has been some confusion over this issue.
The formulation of gravity as a ‘force’ that acts upon the gravitational mass of an object is not supported by experimental observation and leads to theoretical absurdities. The ideas of force, mass, acceleration and even ‘movement’ are ill-defined, vague and not experimentally verifiable.
This post points out the anomalies, the redundancy of the concept of gravitational mass and the inadequacy of Newtonian theory even as a practical measurement system. An alternative way of looking at gravity is proposed which is intuitively superior, theoretically consistent, computationally identical to Newton’s theory, eliminates superfluous variables and provides for a definition of ‘movement’ (and hence ‘acceleration’) as being relative to the local gravitational field.
The narrative
The accepted mechanism of Newtonian gravity is that all objects possess an intrinsic property called ‘gravitational mass’ and that the Earth’s gravity acts upon that mass to produce a ‘force’ which pulls the object downwards. The more mass, the greater the force, which means that one object having twice the mass of another will experience twice the downward force. This downward force results in an acceleration of the object towards the Earth.
All objects fall with the same acceleration
There seems to be experimental evidence that all objects released above the Earth’s surface will fall to the ground with the same acceleration regardless of their presumed mass and that any difference in their speeds is down to air resistance only. Wikipedia
Since all objects in these experiments behave identically regardless of their (gravitational) mass, we cannot deduce anything at all concerning the mass of an object by observing the acceleration of that object in a gravitational field.
We cannot empirically verify the relationship between gravitational mass and downward acceleration because there is no measurable relationship.
This is unarguable.
Theoretical concerns and ‘inertial mass’
Newtonian theory now suggests that there exists another type of mass, an ‘inertial’ mass which ‘resists’ the hypothetical downward force from gravity in exact proportion to such a force. This is the explanation as to why all objects fall with the same acceleration despite having different masses; the inertial mass and gravitational mass are the same and so they both cancel each other out: NASA
From NASA: “(The theoretical) mass of the object does not affect the motion“
Mass is irrelevant according to both theory and experiment
So according to theory, the acceleration is constant and independent of mass. Moreover, according to experimental findings, the acceleration is constant and hence independent of mass.
We therefore have a theory of gravitational mass that has not been verified by experiment and where such experimental verification is actually ruled out by the theory itself!
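The cancellation can be restated numerically: whatever value is assumed for the mass, the predicted acceleration comes out as g, so observations of free fall can never constrain the mass. A minimal sketch, with illustrative masses:

```python
# If the 'gravitational force' on a body is taken to be m*g and the same m is used
# as inertial mass, the predicted acceleration F/m is g for every value of m, so
# no observation of free fall can ever reveal what m was.

g = 9.81  # m/s^2, nominal surface value

for m in (0.001, 1.0, 1000.0):
    force = m * g              # the posited gravitational force
    acceleration = force / m   # Newton's second law rearranged
    print(f"m = {m:8.3f} kg  ->  predicted acceleration = {acceleration:.4f} m/s^2")
```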
Therefore, there is not and cannot be, any meaningful discussion of the effects of something called ‘gravitational mass’, because there are no such directly observable effects and nor can such effects be inferred from theory.
Gravitational mass cannot be said to ‘exist’ in any meaningful sense of the word and it follows that the gravitational ‘force’ that is said to be associated with it cannot be said to exist in any meaningful sense of the word.
The downwards acceleration cannot be said to be caused by a ‘force’ and cannot be said to be connected to such a thing as gravitational mass.
The uselessness of Newton’s second law in this respect
The NASA paper gives Newton’s second law of motion as somehow describing the motion of a free falling object:
force = mass x acceleration
This looks more like a definition of something called a ‘force’ than an equation telling us how an object moves, but we can rearrange it to look like this:
acceleration = force / mass
But the NASA paper concludes: “The mass, size, and shape of the object are not a factor in describing the motion of the object“.
We have a nice looking equation, but what use is it? In order to calculate the acceleration we need first to know both the force and the mass. However:
The mass cannot be determined empirically (see above)
There is no way to directly measure the ‘force’ on a free falling object
The acceleration has been empirically determined to be the same for each object
The Newtonian system is formulated around the idea of mass and force as fundamentals and wants to use these as a basis from which to try to calculate secondary quantities such as acceleration. The force and mass are assumed to be the ’cause’ of the acceleration.
However, the only quantity here that is directly measurable is that of acceleration and so why not take this as a fundamental of the system and derive the other quantities from it? The problem is that the acceleration is constant, which means that if this is the only thing that we can measure then there is no chance of deducing anything at all concerning the other quantities and no way to verify Newton’s laws as applying to falling objects.
The ambiguities of Newton’s first law
Newton’s first law from Wikipedia: “A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.”
This is where the problem lies.
It is simply decreed without justification or precise definitions that if a body is accelerating, then there must be a force acting upon that body. A free falling object is therefore assumed to have a force acting upon it and so even though no force is felt and no force is measurable, a force must be conjured from thin air; the result is the ‘gravitational force’.
Moreover, what does it mean to say that a body ‘remains at rest’? At rest with respect to what, exactly? Any object at the Earth’s surface is said to be rotating with the Earth at thousands of miles per hour and is moving through space at even greater speeds. No object that is observed to be at rest with respect to the Earth’s surface can honestly be said to be ‘at rest’, so what does the term mean? What is meant by ‘motion in a straight line’ under these circumstances?
What is ‘position’?
There seems to be an implicit assumption that the physical world is superimposed upon some Cartesian grid which serves as a reference frame for position and hence velocity and acceleration, but no such construct has been shown to exist or to be empirically measurable and therefore deserves no place in a theoretical model of the physical world.
Other theoreticians imagine that ‘position’ can somehow be measured with respect to the distant stars and galaxies, but at the same time say that these do not have fixed position and are in fact moving away from us at ever increasing speeds.
Consider what happens when an object is ‘dropped’ in a free-falling space station: it doesn’t move with respect to the observer and so cannot be seen to have any forces acting upon it. Advocates of Newton will say that it does have forces upon it and that these are causing it to accelerate towards the Earth. However, the astronauts will not feel any forces upon themselves, cannot measure such forces, cannot directly measure their own acceleration, and will not be able to relate any movement (there is no observed movement) of the object to the mass (mass is unmeasurable) of the object.
The astronauts will therefore not observe, and cannot measure any force upon the object. We have a ‘measurement system’ where literally none of the required variables can actually be measured.
A system of measurement?
Newtonian gravity as a description of physical reality seems totally inadequate, but what about regarding it merely as a System of Measurement, i.e. a system of well defined measurement techniques and equations to be used to solve practical engineering problems?
Wikipedia defines a System of Measurement thus: “A system of units of measurement, also known as a system of units or system of measurement, is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce.”
This sounds like a good idea but the problem with the theory of gravity in this regard is that the fundamental ‘measurable’ of the system is the acceleration of the object and not the mass or force. In fact, both the mass and force are shown above to be unmeasurable and irrelevant to the equation of motion.
The acceleration is not just the fundamental measurable of the system, but the only measurable of the system. An equation of motion in a uniform gravitational field reduces to:
acceleration = g (a constant)
No masses or forces are needed here.
If the gravitational field is variable, then the equation remains the same but with a variable value for ‘g’. Moreover, the value for ‘g’ will be determined by first measuring the acceleration of a free-falling object and inferring ‘g’ from the acceleration, not the other way around.
As far as our system of measurement goes, we only need acceleration as a measurable, with both mass and force being secondary (derived/imaginary) quantities.
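A minimal sketch of that measurement procedure: infer ‘g’ directly from the observed fall of an object (height and time), with no reference to mass or force. The drop figures below are illustrative.

```python
# Infer the local value of 'g' from the observed motion of a dropped object.
# Constant-acceleration kinematics: h = (1/2) * g * t**2, so g = 2h / t**2.
# Only the measured height and time appear; no mass or force is used.

def g_from_drop(height_m, time_s):
    """Local acceleration inferred from a drop of height_m metres in time_s seconds."""
    return 2.0 * height_m / time_s ** 2

# e.g. an object observed to fall 4.9 m in 1.0 s (illustrative numbers)
print(f"inferred g = {g_from_drop(4.9, 1.0):.2f} m/s^2")
```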
An argument for the irrelevance of mass
I forget where this idea comes from:
Consider two apples of equal weight falling towards the ground. They fall at the same acceleration. Move them closer together so that they touch and nothing changes. Now glue them together so that they become one large object of twice the volume/weight/mass. Nothing changes and they continue to fall at the same rate; the amount of ‘matter’ present is irrelevant and the acceleration is always the same.
A field of acceleration?
The results so far suggest that the Earth is surrounded by something we might call a field of acceleration, which causes untethered objects to move towards the Earth with a fixed acceleration.
We can think of an analogy with a river which moves objects downstream regardless of their size or weight. No floating object feels that it is being dragged and none feel a ‘force’ pulling them along. However attempts to pull an object against the stream will certainly require the application of force.
The force needed to pull an object up or down the stream is the force needed to overcome the drag produced by the water and will be the same as the force needed to pull it left or right towards a bank. To rephrase, the force is needed to change the velocity of the object relative to the local flow of the water.
We can therefore consider that the force needed to accelerate an object in a gravitational field is proportional to the attempt to move it relative to the local gravitational ‘flow’.
A gravitational field can be thought of as flowing inwards towards the Earth from space and increasing in its accelerative potential as it nears the Earth’s surface according to an inverse square law. It will ‘drag’ any object towards the Earth in accordance with the local field value at that point.
Problems solved so far
All problems are solved already.
There is no requirement to create a fictitious quantity called ‘gravitational mass’ only to have it cancel out in the math.
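For reference, the cancellation being alluded to can be written out explicitly. This is the standard Newtonian algebra, quoted here only to show where the ‘gravitational mass’ enters and immediately leaves again:

$$
m_{\text{inertial}}\, a \;=\; F \;=\; \frac{G M\, m_{\text{grav}}}{r^{2}},
\qquad
m_{\text{inertial}} = m_{\text{grav}}
\;\;\Rightarrow\;\;
a \;=\; \frac{G M}{r^{2}} \;=\; g
$$

The mass of the falling object appears on both sides only to be divided out again; the observable motion depends on g alone.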
The constant acceleration near the surface of the Earth is regarded as a fundamental of the physical theory and of the system of measurement. Moreover, it is in fact measurable!
Experiments performed in a space station or falling lift are now explained naturally without having to find a balance of complex forces in order to explain a floating object. All objects including the observers are in a force-free space and this is evident by the fact that objects simply float around in mid air.
Acceleration and movement are described relative to local field conditions only. There is no need for a Cartesian grid at the base of physical reality and no need to take into account the movement of distant galaxies. Objects move according to the local gravitational field and any deviation from this movement requires the application of a ‘force’ and so a modified version of Newton’s Law is easily formulated:
“A body remains at a constant speed relative to the local field, unless it is acted upon by a force.”
The phenomenon of ‘weight’ is explained by the scales having to drag or push an object upwards against the local (downward) field flow. The phenomenon of inertia is explained similarly by ‘field drag’; the object is being accelerated against the local field and a force is required. We would expect that in a space station or falling elevator, it would be equally difficult to drag objects in any direction, but it would be nice to see some verification of this.
The equality of inertial and gravitational mass implies that the field is somehow isotropic; it is as much effort to drag the object sideways as it is to drag it upwards (prevent it falling downwards). Compare with dragging an object through a river.
If a deformable float is dragged through a river, it deforms, whereas if it is simply allowed to float downstream, it maintains its form. Similarly, if a balloon full of water is allowed to fall freely in a gravitational field, it maintains its shape, but attempts to accelerate it against the field flow by hanging it from a string or pulling it along a friction-free surface, will cause visible deformation.
We feel heavy because every part of us struggles to move upwards against the constant downward acceleration of gravity. Astronauts in space, however, are moving with the local field flow and hence feel no weight; they are weightless.
An overall vortex structure
The field can be thought of as having an overall spherical vortex structure which intensifies towards the Earth according to the familiar inverse square law. Imagine water flowing down a sink hole to get a picture. The intensity of the field is proportional to the acceleration of matter which increases towards the Earth in the same way that a twig might increase in speed as it flows towards the whirlpool centre.
The intensity of the field is at a maximum at the Earth’s surface and then reduces in a linear fashion towards the centre of the Earth to become zero at the centre. This is the same pattern as the vortex flow in a tornado. The field is rotating at the Earth’s surface at a rate of 360° per day and this ensures that objects released above the surface fall directly downwards and do not drag behind the planet’s rotation. Again, a constant acceleration is maintained relative to the field.
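As a rough numerical illustration of the intensity profile just described (zero at the centre, rising linearly to a maximum at the surface, falling off with the inverse square of distance outside), here is a short sketch. The shape is simply the one stated above coded directly; the constants are the usual measured values and the function name is hypothetical.

```python
# Sketch of the radial field-intensity profile described above: zero at the
# Earth's centre, a linear rise to a maximum at the surface, then an
# inverse-square fall-off outside. An illustration of the stated shape only,
# not a derivation.

G_SURFACE = 9.81      # m/s^2, intensity at the surface
R_EARTH = 6.371e6     # m, mean Earth radius

def field_intensity(r: float) -> float:
    """Field intensity (acceleration) at distance r from the Earth's centre."""
    if r <= R_EARTH:
        return G_SURFACE * r / R_EARTH            # linear rise towards the surface
    return G_SURFACE * (R_EARTH / r) ** 2         # inverse-square fall-off outside

if __name__ == "__main__":
    for r in (0.0, 0.5 * R_EARTH, R_EARTH, 2 * R_EARTH, 10 * R_EARTH):
        print(f"r = {r / R_EARTH:4.1f} R_earth  ->  {field_intensity(r):6.3f} m/s^2")
```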
‘Field movement’ and ‘acceleration’ are towards the Earth but intensity diminishes towards the centre of the planet so there is no infinite accumulation of ‘field substance’ at the centre. This may seem odd, but compare with the almost universally accepted explanation of a gravitational field which is continuously ‘emitted’, with no explanation of how such emission takes place or how an infinite ‘source’ of such stuff could exist. Moreover, the field is assumed to somehow move outwards whilst pulling objects back inwards by influencing their unmeasurable (non-existent) ‘gravitational mass’.
The understanding of ‘field movement’ is by analogy with a water wave in which the wave itself appears to move in a particular direction with a particular speed, but no linear movement of the water itself is present. The wave ‘moves’ but nothing really goes anywhere and so there is no need for a ‘source’ of such a field and no infinite sink needed to dispose of the excess.
Variable day length
The length of an Earth day varies on timespans of only a few days (Wikipedia). The day length on Venus can vary by up to 20 minutes. Explanations are in the form of either external forces generated by the other planets or internal forces arising from the motion of liquid metal in the planet’s core. In neither case is it explained how such forces can act upon a whole planet at once without causing catastrophic deformation of the crust and consequent earthquakes.
The problem, then, is in attributing the variable rotation speed to things called ‘forces’. Given the hypothesis outlined above, we can now consider that the variable rotation arises from variations in the behaviour of the Earth’s gravitational field itself, and it is this field and these variations which affect the rotational speed of our planet.
Gravity pulls objects directly downwards, towards the centre of the Earth, and not at an angle determined by the rotational speed. If we forget about momentum for a moment (too Newtonian), this implies that the Earth’s gravitational field is rotating along with the surface of the Earth and is continuous with it. We could actually say that it is this gravitational field that is ‘causing’ the Earth to rotate, or maybe that the field preserves the constant rotational acceleration in the same way as it preserves the constant linear acceleration of a falling apple.
If we try to explain the variable rotation in terms of ‘forces’, we need huge forces to move the whole planet. However, an explanation in terms of an acceleration field is, by its very nature, independent of the mass of the planet and arises simply from the dynamics of vortex flow. To get a visual picture, watch some eddies in a stream and observe how their local activity fluctuates slightly in response to both the proximity of other eddies and global changes in the flow as a whole.
In classical physics, gravity, energy and matter are all separate entities and the theory of physics is all about describing how these entities somehow affect each other in a meaningful way. In the vortex physics of Konstantin Meyl, however, even electrons and other fundamental particles are formulated as simple field vortices with energy, matter and mass being emergent properties of the underlying field, in the same way that a water vortex is not a separate entity of itself, but a manifestation of the underlying properties of water.
The Earth’s gravitational field, then, spirals inwards from the cosmos and at the Earth’s surface, fine-grained structure appears which is interpreted as ‘matter’. This matter is not separate from the field but ‘is’ the field and the rotation of the Earth is not ‘caused by’ the field but is synonymous with it. The persistence of rotation arises from the properties of the field and is formulated as ‘angular momentum’ in classical mechanics.
What is ‘momentum’?
The field accelerates objects downwards towards the Earth’s surface because the ‘field movement’ or accelerational component of the field is at right angles to the Earth’s surface and moves along with it. The horizontal component of such a field is zero with respect to the Earth’s surface.
A thrown object therefore maintains a constant horizontal speed relative to the Earth’s surface, and this is interpreted as momentum in classical mechanics. Momentum, mass and inertia are thus not intrinsic properties of a moving mass but illusions created by the interaction between the ambient gravitational field and the field structure of the object itself.
No Cartesian grid?
There is no underlying Cartesian grid to physical reality; all movement and acceleration are with reference to the local field conditions. There is no need to hypothesise some independent entity called ‘space’ and no need to hypothesise any absolute metrics of distance or even time as all of these are not fundamentals of reality but measurement artefacts that are dependent upon both local field conditions and the precise mechanism of measurement.
‘Distance’ is the length of a ruler, a physical object. Such a length will vary according to ambient field strength (Tamarack mines experiment) and so the distance metric will necessarily vary. The overwhelming desire for an invariant form of ‘length’ in the form of an invisible entity called ‘space’ or even ‘aether’ has caused physicists to assume the existence of such a thing with no proof and to the detriment of scientific progress.
Mach’s principle
How does an object ‘know’ when it is rotating? What is its frame of reference and how do centrifugal forces arise?
The frame of reference is the ambient gravitational field and acceleration is relative to this field as in all cases. The illusion of centrifugal force arises from movement against the local gravitational flow, just as with a falling object.
Gravity as an electromagnetic field
The idea that gravity is in fact an electromagnetic field has been floated by several people including proponents of the Electric Universe model and German physicist Konstantin Meyl.
Meyl gives a modified version of Maxwell’s equations to describe the field as the cumulative average of all of the magnetic dipoles of all of the fundamental particles which constitute the body of the Earth and any object within its ambit. Calculations are given in his book “Scalar Waves: A first Tesla physics textbook for engineers” which give quantitative support to this hypothesis.
What is interesting is that descriptions from Meyl, based upon a theory at the atomic level, seem entirely consistent with the model presented above. The laws of physics are the same at all scales of reality and so careful interpretations of macro phenomena can lead to valid hypotheses concerning reality at the atomic level.
A brief note on causality
Newton’s first law: “A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.”
Note the implication here of causality; a force is causing a body to change its customary motion and if a body is changing its motion then there must be a force acting upon it.
How do we test this? How do we quantify the forces and accelerations?
Newton’s second law in mathematical notation:
force = mass x acceleration or acceleration = force / mass
Note the lack of any sort of causality. We just have mathematical equality in equations where manipulation is according to the laws of mathematics and not the laws of causation. The equations can be reversed left to right and divided either side and the ‘meaning’ remains the same.
There is no symbol for ’causes’ in classical physics, but the equations are always interpreted as somehow encapsulating causality. We therefore have a theoretical framework that is incapable of expressing one of the main ideas of its own inception.
This inevitably leads to confusion. How can we ever prove that it is a force which is causing the motion as opposed to the acceleration of a mass which is causing an apparent force? If a force and acceleration are always co-present, then in what sense can one be said to be ‘causing’ the other? If we can get by with a mathematical framework that does not include the idea of causation, then why did we need such an idea in the first place?
Newton has chosen to essentially invent the concept of a force as being somehow ‘causative’ (of a change in movement) in the Universe but he could just as well have decided that ‘acceleration’ was a fundamental property of objects near a mass and that such an acceleration, if opposed, would lead to a measurable force. The mathematical theoretical framework, containing no concept of causality, cannot possibly refute this idea and so we are completely justified in conceiving of a universe where ‘acceleration’ is primal and (inertial) ‘forces’ are a secondary epiphenomenon.
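The symmetry of the mathematics can be made explicit with a trivial sketch (illustrative only): the second law treated as a constraint that can be solved for whichever quantity is missing, with no direction of causation anywhere in it.

```python
# Newton's second law treated purely as algebra: given any two of
# (force, mass, acceleration), the third is fixed. Nothing in the mathematics
# marks one quantity as the 'cause' of another.

def solve_second_law(force=None, mass=None, acceleration=None):
    """Return (force, mass, acceleration) with the missing quantity filled in."""
    if force is None:
        force = mass * acceleration
    elif mass is None:
        mass = force / acceleration
    elif acceleration is None:
        acceleration = force / mass
    return force, mass, acceleration

print(solve_second_law(mass=2.0, acceleration=9.8))    # force inferred: 19.6
print(solve_second_law(force=19.6, mass=2.0))          # acceleration inferred: 9.8
```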
Summary
The idea of Newtonian gravity as arising from a ‘force’ exerted upon a gravitational mass has been shown to be nothing more than an intellectual conjuring trick, with the mass itself acting as the MacGuffin, a beguiling distraction which has nothing to do with the mechanics of the trick itself and is, in this case, not measurable, not observable and not computationally relevant.
A new way of thinking about a gravitational field has been described which:
Eliminates the anomalies of the Newtonian system
Has no surplus variables
Has no theoretically unmeasurable quantities
Is computationally identical to Newton’s system
.. and is therefore consistent with existing experimental results
Is less confusing to think about
Is consistent with the idea of gravity as an electromagnetic field
Is consistent with the bottom-up theory of Meyl
Is consistent with the thought experiments of Einstein
Relies upon local field conditions only
Requires no imaginary Cartesian grid
Defines ‘movement’ relative to the local field
Has been derived from observations at the macro scale
The images below are from electron micrographs of biological tissue. The tissue has been frozen, sliced, stained and finally subjected to an electron beam.
The images are claimed to represent nuclear pores seen from different angles but there are good arguments from Harold Hillman to suggest that this is not the case and that what we are seeing are merely artefacts of the microscopy process.
The artefacts on the right consist of what look like small toroidal structures composed of a dozen or so smaller tori seen at right angles.
Fractal electron ring vortices
In the image below from a video by Bob Greenyer we see the results of electron streams impacting a metal plate(?) or something similar.
The patterns look like the results of the formation of fractal ring vortices in the substrate. A clear ring is seen, as is a subdivision into numerous smaller elements which could also be interpreted as ring vortices.
The similarity to the images above from Hillman is perhaps confirmation of his claims and we can see now that the patterns observed originate not from the tissue at all but from the electron beam itself even before it has impacted the material on the microscope slide.
Electron micrograph images therefore seem more closely related to the fundamental laws of field physics than they are to the structure of a living cell. Certainly the shapes seen above did not exist in the sample prior to being ‘photographed’ and are completely fabricated by the process itself.
These seem natural if regarded as electromagnetic field structures, as the field laws are (almost) scale invariant. If a ring vortex can form at a large scale then it can form at a smaller scale.
It is time to consider that what an electron beam really consists of is a stream of charged electromagnetic field vortices which have been misinterpreted as a stream of charged particles.
Field vortices are slightly different to particles, though, in that they:
Are mutable (they can alter their shape)
Possess an intrinsic energy
Consist of both electric and magnetic fields
Maybe the electrons leave the generator one at a time looking like particles, but they soon use their energy and magnetic fields to organise into a least energy solution appropriate to their new environment which consists of filaments of flowing ring vortices.
The principle of energy cascade drives energy inwards and there is nowhere for it to go so it compresses down into smaller and smaller vortices to form the fractal structures illustrated.
These ring vortices are formed in the beam independently of the target substrate but form similar patterns whether they impact a metal plate or biological tissue.
Longer exposure times create more fractal structure
Biological samples are often stained with metallic dyes and subjected to a powerful beam for several seconds or longer.
If the beam is left on for extended periods, more detail becomes apparent: “The results presented here suggest that longer electron beam exposure times provide more electron densities of bio-materials analyzed by TEM imaging, ultimately resulting in optimal visualization of their detailed structural features. “
We can re-interpret this now as meaning that the effect of an electron beam is to arrange particles of metallic dye into stereotypical vortex structures and that the longer a sample is targeted, the more fine detail is created by the beam.
The basic structure of beam and image are set at the outset and longer exposures initiate an energy cascade leading to accumulation at smaller scales. This creates the fractal structures which are misinterpreted as fine-grained biological artefacts.
Exosomes
The pattern here is slightly different but still fractal in nature, with distinct circular structures surrounded by a ring of smaller circles. However, if the method of photography is the same in each case then the electron beam will be adopting a similar fractal structure long before it touches the tissue sample.
Cell division (mitosis) is a complex affair, with accounts consisting largely of observations, with little to no attempt to describe a mechanism in terms of the basic laws of physics. This is no surprise, since the laws of conventional physics are incomplete and insufficient to describe biological systems.
However, the post: The nature of the bio-field describes a nested toroidal biofield for the organisation and regulation of living organisms which sheds light on many hitherto unexplained phenomena.
This post looks at a description of mitosis from researchers who are clearly unaware of such a hypothesis but whose account nevertheless lends considerable support to it. Many observed phenomena can now be described in terms of basic physical laws as opposed to biological teleology. Parallels exist with morphogenesis in other areas of biology and with the phenomenon of ‘gravitational lensing’ in the cosmos.
In the following diagram (left), microtubules are shown ‘growing’ out from the centrosome constructs, somehow circumnavigating the nucleus and moving towards the opposite pole of the cell.
When it is time to divide, two centrosomes become anchored at opposite poles, the cell becomes rounder and firmer in shape and the microtubules provide the tensile forces that enable the splitting of the nucleus and the sharing of the chromosomes.
From the paper: “Growing microtubule plus (+) ends radiate away from the centrosome, establishing an inherent polarity.“
We need to address several questions:
How and why does a cell suddenly become more spherical?
How are the microtubules created?
How is the direction of growth determined?
How is ‘polarity’ created?
What is the overall energy flow?
The overall energy flow
The cell sits within a larger vortex system where energy flows in an overall torus shape with the added tendency to spiral inwards to create an energy cascade from the larger to smaller scale. This means that each individual cell is the recipient of a continuous flow of electric field energy from the outside.
Within a cell, movement of energy is again of a toroidal topology according to the Hairy Ball Theorem of algebraic topology and the Laws of Electromagnetism. Electric fields form a vortex pattern, with the nucleus at the centre of the energy flow. A field gradient is thus formed throughout the cell and the vortex structure provides a natural polarity.
This structure is the source of the overall organisation of both energy and activity within the cell.
How and why does a cell become more spherical?
A cell has a spherical vortex topology but is never an exact physical sphere as it must fit in with the general cellular structure and will necessarily become deformed by the surrounding cells. An increase of energy to the cell or a reconfiguration of the vortex system to send more energy to the periphery will surely result in a firmer and more spherical cell as it reconfigures against the pressure from its neighbours.
The symmetry of the sphere makes it the strongest, most stable structure in existence and provides a resilient framework for the microtubules which will attach to the periphery and provide tensile forces for the eventual splitting of the cell.
How are microtubules created?
Some sort of energy is needed to create structures within the cell. One possibility for the organised transport of energy is the ring vortex, an electromagnetic equivalent of the smoke ring.
Watch the creation of the nervous system of the zebra fish and imagine that ring vortices emanate from a glowing energy source to travel along an already existing nerve fragment to its end. Rings accumulate at the end of the tubule, leading to a volume of increased electrical activity, visible as the glowing tip at the end of the nerve, which is used to create or assemble the additional matter required to manufacture the next section of nerve.
Similar processes seem likely for the genesis of microtubules.
The overall energy flow
Energy now flows around the cell in a vortex structure and spirals inwards to the centrioles. These structures act as vortex field accumulators and become the main energy source for the microtubules. This energy flow is used to create the filaments in the first instance and as a conduit once they are completed.
Energy spirals inwards towards the centrioles where it accumulates and then flows outwards along the microtubules towards the cell periphery to complete a continuous flow. If no ‘work’ were done with the energy we would have a miniature perpetual motion machine. However, as it is, energy is transduced to get things done whilst being replenished from the free energy from the cellular matrix as a whole.
The cell forms its own battery system according to the centripetal vortex principle and forms differences of electrical potential according to the overall electromagnetic topology.
What determines the direction of growth of the microtubules?
On the left, microtubules grow out of the centrosome and follow a curved path, first towards the nucleus and then skirting around it before moving away from it somewhat.
How do the tubules know where they are going? What determines their path?
The microtubules have no sensory apparatus and cannot detect objects at a distance and so they must be acting strictly according to local forces which necessarily arise from the field gradients of the vortex system.
The field moves in a vortex and intensifies towards the vortex ‘radius’ (here the nucleus) according to an approximate inverse square law, and it is this field gradient that guides the ring vortex which generates the tubule itself.
A ring vortex is a field vortex and has a finite size, meaning that as it moves past the nucleus, it is subject to a field gradient, with the stronger field nearer the nucleus. Such a field leads to a physical contraction of the ring close to the nucleus and a concomitant reduction of speed which in turn causes a slight deviation of direction towards the nucleus, thereby causing the curved paths we see.
The mechanism is the same as that which is responsible for the ‘gravitational lensing’ of photons around a massive object in space. (Konstantin Meyl: Scalar Waves..). Classical photons are massless and have no dimensions which means they should be unaffected by gravity or any electromagnetic field gradient. Photons according to Meyl, however, are finite sized ring vortices and will behave according to the mechanism described above.
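As a purely illustrative toy model of the bending mechanism described here, the following sketch marches a finite-sized ring through an assumed speed gradient that slows it near a central ‘nucleus’; the near edge moves more slowly than the far edge and the path turns towards the slow side, as in refraction through a graded medium. The speed law and every numerical value are arbitrary assumptions chosen only to show the curvature qualitatively.

```python
import math

# Toy model: a ring of finite width moves through a medium whose propagation
# speed drops near the 'nucleus' at the origin. The speed difference across
# the ring turns the path towards the slower (inner) side.
# All parameters are illustrative assumptions.

def speed(x: float, y: float) -> float:
    """Local propagation speed; slower nearer the nucleus at the origin."""
    r2 = x * x + y * y
    return 1.0 / (1.0 + 1.0 / (r2 + 0.1))    # dips smoothly towards r = 0

def trace(x: float, y: float, heading: float, width: float = 0.2,
          dt: float = 0.01, steps: int = 2000):
    """March the ring forward; a speed difference across it turns the heading."""
    path = [(x, y)]
    for _ in range(steps):
        nx, ny = -math.sin(heading), math.cos(heading)      # left-hand normal
        v_left = speed(x + 0.5 * width * nx, y + 0.5 * width * ny)
        v_right = speed(x - 0.5 * width * nx, y - 0.5 * width * ny)
        heading += (v_right - v_left) / width * dt          # turn towards the slower edge
        v_mid = 0.5 * (v_left + v_right)
        x += v_mid * math.cos(heading) * dt
        y += v_mid * math.sin(heading) * dt
        path.append((x, y))
    return path

if __name__ == "__main__":
    # A ring launched to the right, passing above the nucleus, curves towards it.
    for px, py in trace(x=-3.0, y=1.0, heading=0.0)[::400]:
        print(f"({px:6.2f}, {py:6.2f})")
```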
“That which is above is like to that which is below, and that which is below is like to that which is above.” – Hermes Trismegistus
The laws of nature are the same at all scales of reality.
How is ‘polarity’ created?
Contemporary biology takes its cues from contemporary physics and assumes ‘charge’ as the originator of electric fields. What this means is that in order to have a measurable field gradient, there must be separation of charge everywhere in biological systems. This begs the question: “How did the charge get separated?”.
The point being that the only way to move a charge around effectively is via an electric field, but the field is assumed to come from the charge itself! It appears that we are therefore stuck with the idea of self-organising charges which move against the electric field gradient that they themselves are somehow creating! This is just not credible.
The solution to the creation of field gradients lies in the electromagnetic field structures that result from the laws of vortex physics. A vortex is created, accumulates energy and the result is a spinning electric field and a concomitant magnetic dipole. The whole arrangement has a clear potential difference between vortex radius (cell nucleus) and cell periphery. This is somehow measured and interpreted as a ‘charge’ distribution. This may well be the case, but the driving force comes from the vortex principle.
The human biofield is therefore the progenitor of all electromagnetic phenomena in the body and all charge movement is the result of field gradients and polarity, as opposed to the cause of it.
Mitosis
Microtubules grow from the centrosome out to the cell boundaries and inwards towards the chromosomes and anchor themselves at these extremities in preparation for mitosis. Field currents (ring vortices) continue to flow through the microtubules and this supplies the energy necessary for contraction and eventual splitting of the nucleus.
Once the cell has split in two, twin vortices form and continue to develop according to vortex forces, becoming separate, stable and spherical of themselves. New field gradients form according to basic physical principles and the two nuclei reform, with existing organelles settling into place within the newly created vortex structures.
Due to its polarized nature, the interphase microtubule array created by the centrosome provides a roadmap for the directional movement of microtubule-based motor proteins throughout the cell. Kinesin motor proteins that transport organelles and vesicles are plus (+) end directed and processive, meaning they are able to make long excursions along microtubules and towards the cell periphery (anterograde transport).
Conversely, cytoplasmic dynein motors move toward the minus (−) ends of microtubules and drive the retrograde transport of subcellular cargoes. Since microtubule minus-ends are anchored at the centrosome, which is typically tethered to the nucleus, cytoplasmic dynein moves cargo towards the nucleus.
Within the framework of vortex physics, the microtubules are no longer ‘statically’ charged, but filled with a ‘living’ energy in the form of ring vortices. These consist of moving electric field structures within, and in the immediate vicinity of, the microtubules and whose energy can be harnessed by an appropriately structured molecule to effect some sort of transport mechanism.
When good centrosomes go bad
A cell with an extra centrosome will not work: “divergence from normal centrosome number and structure, is a common pathognomonic cellular state tightly associated with cancers and other genetic diseases”
This is because a centrosome is an energy vortex, and an extra centrosome will be accumulating energy that could have been used by the rest of the cell. Moreover, a vortex surrounds itself with a field gradient which potentially affects the field structure of the entire cell. Since all activity and movement within the cell depends upon the existence of a very specific electromagnetic field structure with specific characteristics, there should be no surprise if things don’t work out so well.
Summary
Everywhere in biology we see the footprints of electromagnetic vortex systems at work in the creation of order and the management of energy.
The forms observed are reflections of an underlying fractal vortex field. Energy flow is centripetal and in the form of toroidal movement and vortex cascades. The movement of organelles is via the field gradients created by such flows and teleological descriptions of such movements can, in many cases, be replaced by explanations in terms of the fundamental laws of electromagnetism.
The morphology and behaviour of both exosomes and anthrobots is explained largely in terms of fundamental vortex physics. There is little need to involve teleological biological processes and it is debatable whether these entities have any biological significance.
The biological artefacts below are claimed by some to represent the Sars CoV-2 ‘virus’ and by others to represent ‘exosomes’, small packages deliberately created by the cell in order to recycle and transport resources from one place to another.
Very likely neither is the case and they are merely agglomerations of biological material released from dying cells and held together by electromagnetic forces.
We can ask how this phenomenon arises from the basic laws of physics and come to the conclusions that very little biology is involved, that the image need not represent any phenomenon occurring in vivo or even in vitro and that it may be purely an artefact of the electron microscopy process.
You can see where the cellular debris is just aggregating around the Lipid coating of the vesicle. The cellular debris on the left is just incorporated into a lipid globule on the right and mislabelled as a “Sars Cov 2” this will become very apparent with our latest research. – Jamie Andrews: https://x.com/JamieAA_Again
Material from a deteriorating cell wall appears to have clustered around a vesicle somehow. Some sort of organised movement seems to have taken place and granules of something or other seem to move away from the cell and towards a smaller entity.
Why?
What are the physical forces involved?
Where does the energy come from to effect this movement?
What is the organisational principle by which this happens?
What is it that guides the granules?
Why do the vesicles appear to grow just as the cell is dying?
If a general principle of an electromagnetic bio-field based upon a vortex structure is accepted, then there is no mystery here as everything is explained by the natural actions of such a field and the fact that the tissue has been removed from the host field.
The tissue itself retains some of its own energy and field structure but without a supervening field to supply it with morphogenic instructions and a continuing energy supply, it cannot maintain its own integrity and will inevitably deteriorate.
Firstly, note that the granules are clustered around the vesicle that is nearest to the cell wall and form a pattern that differs from the rest of the cell boundary. It seems that they have moved there from somewhere else (the cell itself) and that this movement is somehow goal oriented.
Much movement of particles within cells is said to be ‘Brownian’ in nature, that is to say, driven by a process of random vibration of molecules. The result though is obviously inconsistent with a purely diffusive mechanism and so something else is at play, some goal directed process that has the power to organise inert matter into ordered vesicles.
Fundamental forces
If the granules have been moved, then there must have been some sort of force that moved them. Physicists recognise four fundamental forces in nature:
Weak nuclear force
Strong nuclear force
Gravity
Electromagnetic forces
The force we seek exists outside of the atom, which rules out the first two, and I don’t think anyone believes that gravitational forces are nuanced enough to organise biological material. This therefore leaves electromagnetism as the motivational force behind this phenomenon.
It may be claimed that the granules are moved by ‘kinetic’ forces i.e. by the flow of water, but kinetic forces ultimately arise from the electromagnetic repulsive force. However, this is a simplified version of an electromagnetic field force and still leaves unanswered the question of what it is that organises the flow of water in such a situation.
The behaviour of electromagnetic fields is so similar to the behaviour of water that they will surely amount to the same thing at the scale of a water molecule.
Energy supply
Some sort of energy supply is needed. Many sources will point at Brownian, Gibbs or ‘free’ energy as candidates, but again this energy is diffusive and random; it can’t be pointed in a particular direction, for example. It really isn’t possible to guide these energies in such a way as to have them appear in exactly the right place at exactly the right time and in the appropriate quantities.
The energy required then will come from a fractal vortex system supplied by the host organism at first and by the tissue sample and microscopy environment later on.
Organisation
The granules appear to have moved from a decaying cell membrane towards a developing exosome. How do they know to do this? How do they know in which direction to move? They are not living things and have no sensory apparatus. They cannot ‘see’ where they are going and have no way of knowing that an exosome is developing nearby.
If this is true and there is no knowledge of the future or of some situation at some distance away then the only conclusion is that they are moving according to local forces only and with the help of local energy only.
All the answers
A fractal electromagnetic vortex structure provides all the answers in a very natural way whilst adhering closely to the most fundamental laws of physics.
The human body forms the outer periphery of the vortex system and directs energy down the vortex cascade all the way to a single cell, which becomes a consumer of this continuous supply. A cell forms a vortex of itself and its outer ‘membrane’ is maintained by the concentrated energy that accumulates here.
When the tissue is separated from the main body, it suffers a drop in energy supply but can live on for a while on energy accumulated from the local environment. A vortex develops, perhaps in a water droplet on a microscope slide, and the heat of the laboratory is organised into a vortex structure which moves inwards towards a cellular vortex.
Eventually the cell starts to deteriorate and the main vortex structure, decimated by the lack of energy, starts to re-fractalise into smaller vortices which accumulate matter to become the vesicles we see. Each exosome is now at the centre of its own vortex and commences to suck energy from the environment in the same manner as a tornado accumulates both matter and energy from apparently still air.
The cellular vortex itself weakens, and the granules move along the centripetal field gradients created by the exosomes. The vortex principle is sufficient by itself to create forces, energy and directional movement; no other organisational or energetic influence is required.
The smaller vortices of the exosomes are now in the ascendant and will accumulate any energy released by the dying cell. Cell death is now synonymous with energy depletion and vortex decay. Biological entities use the vortex as a template for morphogenesis (The morphogenesis of capillaries), so the process of constructing an exosome is very similar to the way a cell was constructed in the first place; the same laws of physics apply but, being divorced from the main bio-field, there is no eventual teleological end to the formation of exosomes. They look like biological entities with a useful function but are probably just accretions of cellular debris.
The exosome is now at the centre of its own energy vortex and, as is typical of such systems, subsidiary vortices will form around the periphery, comparable to tornadoes forming around a large anticyclone in hurricane season. These are interpreted as some sort of protein and indeed they may be, as this is how proteins are constructed, with a ring-like vortex accumulating and arranging matter: The nature of the bio-field. Any attempt to ‘isolate’ such a protein will likely fail as it is a product of a unique vortex structure and will disintegrate as soon as such an environment changes in any way.
Electron microscopy
Electron microscopy is described as using an energetic stream of electrons which is focused by an electromagnetic lens in a way similar to a visible light lens. The electrons are depicted as travelling in straight lines towards the sample, through the sample (without modifying it significantly), and out the other side where they accurately register the forms of the physical matter that was on the slide in the first place.
This all seems highly unlikely and, if true, requires considerable proof in its own right.
An electron has an electric charge and a moving electron constitutes an electric current. Such a current necessarily generates a magnetic field at right angles to the current, and other charged particles moving within this field will have their paths deflected by it. The total sum of all these movements has been observed in plasma physics as constituting a Birkeland current, manifesting as a stable filament of counter-rotating and co-axial electromagnetic fields. See below.
Such constructs are visible at larger scales as tornado formations in the atmosphere..
.. and when focused on some sort of receptive plate in a laboratory, will produce interesting fractal torus patterns which look very similar to biological artefacts:
Note that these images were not produced by passing an electron beam through a biological sample or through anything at all. All the shapes arise naturally from the interaction of the electron stream with the plate in question and very likely are a reflection of the organisation that was present in the stream even before it hit the plate.
Harold Hillman observed similar structures in electron micrographs of biological samples:
So an energetic vortex stream of charged particles is passed through a sample of biological material consisting of charge-structured molecules and possibly metallic dyes, whilst the biological sample itself relies upon an electromagnetic vortex field to construct and maintain organelles, proteins, vesicles and even entire cells.
In a living organism, the shapes of the vesicles are determined by the bio-field of the organism which is already breaking down once the tissue is removed from the body. Now a powerful stream of electrons, consisting of precisely those type of forces that are used for cellular construction, is fired at the sample. A new type of vesicle is created which bears no relation to anything that a living organism would need to construct.
The construction of the exosome is not a ‘biological’ process with a teleological purpose. There is no biological ‘meaning’ in the artefact and it has no lengthy development process; it is simply the result of electromagnetic forces acting upon cellular debris. The forces are strong and such an artefact can be assembled in a fraction of a second.
I remember someone (Stefan Lanka?) writing that there is no cell membrane as such but that one appears in an instant when tissue is removed from its environment; an apple is broken or cut and some sort of double layer immediately forms, giving the impression of a cellular membrane. The short duration of the creation here lends support to the idea that only physical (electromagnetic) forces are involved, as opposed to a lengthy biological development process.
Repair via vortex template
A short video from Michael Levin shows a single celled ‘anthrobot’ repairing a mechanically induced wound.
One very obvious interpretation of this now is that although the physical matter has been misshapen, an electromagnetic vortex persists in a necessarily circular shape and acts as a template for regeneration. Field movement of the vortex simply drags the organic matter back into a circular or spherical shape.
In another video, these ‘artificial life forms’ are apparently seen swimming around all by themselves, but another interpretation is possible. Several interesting features stand out as indicative of a driving vortex structure:
The bots are said to move by use of cilia but some do not have cilia
Motion is chaotic with rapid and apparently random changes in both speed and direction
Movement is not always independent, with several bots seeming to stick together even through the random motion
Multiple bots rotate either on the spot or whilst moving
Bots rotate around a centre outside of the bot
Orbital debris moves around and with the bot
Motion can be rapid even with a low Reynolds number
Hypothesis: All morphogenesis, movement and other activity arises from the action of an electromagnetic bio-field and all suppositions to the contrary are merely an illusion.
The bots look like they are in a water droplet i.e. a circular or hemispherical container. This is the ideal shape for the formation of electromagnetic vortices. Imagine the bots are sitting in such a field of swirling electromagnetic currents similar to the water eddies in the bend of a river. The bots now ‘look like’ they are in such an environment.
Each bot is formed and maintained by its own vortex and moves around via field-interaction with its environment. Several bots can become trapped in an enclosing vortex structure and move around together. Sudden changes in velocity can be explained by sudden changes in a field state; this is behaviour typical of electric fields but not so much in a living organism trying to swim through a medium the consistency of warm tar (low Reynolds flow).
Rotating bots are likely driven by an external vortex flow. The cilia may help with this but are not strictly necessary and are not obviously causal. Bots are seen to be rotating about a vortex external to themselves; they rotate at the centre of their own vortex but are at the same time caught in the current of another.
Bots accumulate energy from the environment and form a peripheral membrane as with the exosomes. Surplus energy here is discharged in corona-like filaments which form a morphogenic template for the cilia. The cilia continue to discharge once formed and vibrate as they do so, thereby creating the illusion of ‘swimming’.
The external vortex stream funnels debris towards the vortex centre and this can be seen in orbit around the anthrobot.
Energy consumption is efficient within such a system, with everything moving ‘with the flow’, creating very little friction and dissipating very little waste. Eventually, however, everything slows down, movement ceases and there is little energy left to hold together the vortex of the bot itself, which proceeds to disintegrate.
Disintegration alone is insufficient to stop the bots moving as they are driven by forces and energy flows outside of themselves which persist for some time. If movement came from the organised activity of the bots, it would surely subside as the bots disintegrated, but if the bots and their movement originate from the vortex field itself, we would expect that it is the field (and hence bot-movement) which lasts longer than the integrity of the bot.
The energy source
The preparation supplies some initial energy and the tissue that the bots were created from no doubt has energy within its own bio-field which is transferred to the experimental set-up. After this, one possibility is that the vortex field is fuelled by the heat and light from the laboratory itself.
Heat is described as dissipative, entropic, thermodynamic and random, but once within a vortex structure can be absorbed into the general vortex flow to extend the life of the whole bot community. Partial proof of this are the experiments performed by Gerald Pollack and others, which show that an influx of infrared radiation can be organised to push micro-spheres through a tube.
Another possibility is that vortex discharge direct from the ionosphere is transduced directly by the vortex field within the water droplet. The following diagram shows the electromagnetic field at the surface of the sun. It has already organised itself into a cellular pattern albeit on a rather grand scale.
Now if a similar arrangement is present at the surface of the Earth then this could conceivably be used to fuel the anthrobot community.
Similar comments apply to all tissue and bacterial cultures and could explain seasonal variations of behaviour and cytopathic effects as described by Kaznacheev for example: Mirror Cytopathic Effect
The helix as a fundamental in biology and physics
Anyone not believing that vortex structures can arise naturally without supplementary information just needs to look at the many helical patterns in nature, from ‘God’s DNA’ appearing in cloud structures to entire nebula organised as a double helix. This is surely a fundamental structure of nature and arises from primordial electromagnetic fields.
The appearance of vortex structures and specifically double helix patterns in biological systems should, by now, come as no surprise. Stefan Lanka has asserted that DNA comes ‘out of the nothing’, and once again, an initially outrageous sounding statement turns out to have a good scientific basis in electromagnetic forces and plenty of precedent in parallel structures throughout the cosmos.
Relevance to ‘virology’
The artefacts shown are said to be a pathogenic virus that is the cause of disease in human beings and, moreover, that it is characterised by a specific genome sequence which has been identified and documented.
However, it is obvious now that:
There is no guarantee that the images depicted are valid representations of what happens in biological systems
There is no guarantee that they are even representative of what happens in a tissue culture
The formation of such artefacts is almost entirely by the basic laws of physics and needs very little input, if any, from a biological system
Helical structures can seemingly form anywhere at any time in the cosmos
Nobody can link a specific genomic sequence as belonging to one of these artefacts
There is no ‘model’ for explaining how a particular genome sequence causes specific symptoms
Correlations between these artefacts and either season or latitude may be explained by the direct influence of the Earth’s magnetic field on either the tissue sample itself or the electron stream that ‘illuminates’ it
These artefacts very likely have no biological significance whatsoever
Mainstream wisdom is that antibiotics are one of the greatest discoveries in medicine and have saved millions of lives. However, studies proving this are hard to find, leading to many now taking the position that bacterial infections simply do not exist and that bacteria will only consume tissue that is already dead.
His short summary: “Can’t find randomized controlled trials (RCTs) showing a curative effect, only reduction of symptoms and AEs...”
Four papers are listed. One comment on the thread suggests that antibiotics work in the specific case of cystitis.
This page presents a hypothesis explaining these results based upon the bio-field model outlined here: The nature of the bio-field. A theoretical argument is given as to why it may be impossible for bacteria to destroy healthy cells and a definition of ‘healthy’ is proposed. An argument is given to explain many cases of spontaneous remission as a bio-field state change in response to the administration of a variety of toxins including antibiotics.
The Cochrane papers
Antibiotics for acute bronchitis (review) – Smith, Fahey..
Immediate versus delayed versus no antibiotics for respiratory infections (Review) – Spurling et al.
Antibiotic therapy for the treatment of infective endocarditis – Martí-Carvajal
The review of the Cochrane papers:
Antibiotics for upper respiratory tract infections: an overview of Cochrane reviews – B. Arrol https://t.co/nG98xVBKGn
The four Cochrane reviews reveal a limited role for antibiotics in the four upper respiratory tract infections discussed above. Other than for radiologically proven acute maxillary sinusitis, the reviewers do not recommend antibiotics as a first choice treatment for any of the four conditions.
Guidelines tend to reflect the level and type of evidence that a group will accept, and this may explain why the guidelines from the Infectious Disease Society of America (2002) recommend routine treatment for otitis media, whereas the Cochrane reviewers are more cautious.
There has long been a consensus about the lack of need for antibiotics and, even with the new information on acute purulent rhinitis (along with a common cold), the reviewers remain consistent with the older guidelines in not recommending antibiotics as a first-line treatment.
From the paper:
Routine antibiotics for the four aspects of acute upper respiratory tract infection are not mandated for non-severe cases.
Antibiotics for acute otitis media are effective but the NNTB is about 15.
Delayed prescriptions for antibiotics for acute otitis media may be a good compromise for not giving immediate antibiotics.
Antibiotics may not be necessary for sore throat in areas where rheumatic fever and glomerulonephritis are uncommon.
Antibiotics are not effective for the common cold.
Antibiotics may be effective for acute purulent rhinitis associated with the common cold, but their routine use is (not?) mandated as most patients will get better without antibiotics.
“Antibiotics are superior to placebo regarding both clinical and microbiological success in adult non-pregnant women with microbiologically confirmed acute uncomplicated cystitis. However, they are associated with more adverse events.“
So it appears that antibiotics are effective against cystitis. However, we still don’t know if the mechanism is that which is claimed by the mainstream narrative. Do they really work by killing off the bad bacteria or is there some other process at work?
The central thesis of this website is that biological systems are regulated by an electro-magnetic bio-field that is open to disturbances in the Earth’s magnetic field and uses various cosmic rhythms (diurnal, solar and lunar cycles) as pace-setters (Brown). Disruption of such a system is synonymous with what we call disease; symptoms arise from dis-regulation and will often coincide with measurable atmospheric events, as in the case of influenza.
Cystitis is related to menstrual cycles
A search using an AI engine easily provides a nice summary with references:
Fluctuations in hormone levels, especially oestrogen, during the menstrual cycle can affect bladder sensitivity and potentially trigger or worsen cystitis symptoms.
Oestrogen plays a role in maintaining the bladder lining and vaginal flora. Lower oestrogen levels, particularly during menstruation, can make the bladder more susceptible to irritation and infection.
While hormonal changes are a key factor, other elements like stress, sexual activity, and hygiene practices can also contribute to cystitis flare-ups during the menstrual cycle.
Changes in hormone levels are indicative of changes in regulation and regulation is managed by the bio-field.
Low oestrogen may make the bladder more susceptible to inflammation – but why? and what causes low oestrogen in the first place?
Cystitis is linked to changes in the weather
Again, from an AI engine:
Yes, some studies suggest a link between atmospheric conditions and cystitis. Specifically, warmer weather has been associated with an increased risk of UTIs, including cystitis. Conversely, colder weather can make the body more vulnerable to infections, potentially increasing the risk of cystitis.
Some individuals report that changes in air pressure (like drops during storms) can trigger flares of interstitial cystitis, a condition that can cause symptoms similar to cystitis.
While not definitively proven, some studies suggest a potential link between humidity and UTI risk.
Some studies have observed a seasonal pattern in UTI incidence, with higher rates in warmer months.
Many individuals with IC/BPS experience a worsening of symptoms during specific times of the year, often coinciding with seasonal allergies or changes in weather.
So cystitis is related to warm weather, cold weather, too humid, too dry, wrong season etc. As with influenza, the common factor here is very likely that which is described by the very last phrase above: “changes in the weather”.
Changes in the weather
The weather changes when a pressure front passes overhead and this triggers all sorts of interesting conditions but almost all entail some sort of electrical disturbance and it is these effects that are responsible for disturbing the regulatory system and thereby causing disease.
Pressure fronts are accompanied by turbulence in the atmosphere which causes the movement of charged particles in the air which in turn leads to the propagation of electromagnetic impulses (sferics). Sferics can propagate with little attenuation and can be detected thousands of kilometres from their source (Wikipedia) . It is these electromagnetic energies that alter the normal functioning of the indigenous bio-field to cause a variety of supposed ‘infections’ and other disease types.
Cells and vortex energy: a hypothesis
The page The nature of the bio-field describes a biological organism as being regulated by an electromagnetic bio-field that consists of a series of nested vortices. The physical body is created by such a field and as such reflects the overall fractal toroidal topology.
The body is both formed and maintained by this energy structure and any depletion or dis-regulation will lead to problems.
A single cell consists of a single energy torus with the nucleus at the vortex ‘radius’. The energy structure of a cell is that of a spherical vortex with negative electrical field spinning around the periphery. This spinning field gives rise to a magnetic dipole and it is these magnetic forces that hold the cells together, whilst the electrical ‘zeta potential’ keeps them spaced apart and prevents them collapsing.
Bacteria have the same form and they must contend with strong electromagnetic forces in order to move about and reproduce. The negative electric potential of healthy cells makes it impossible for a bacterium to approach a cell sufficiently closely for it to consume it.
If a cell is deprived of energy for long enough then the vortex structure will weaken and so will the zeta potential. The cell is not holding together so well and is not adhering to its neighbours; the tissue is weakening. After a while the repulsive forces are so weak that bacteria are able to approach the cell and dismantle the whole structure.
This, then, gives a theoretical basis to the idea that bacteria cannot infect healthy tissue and this seems to be confirmed by the data above.
Membranes are sensitive
A membrane represents a sharp discontinuity of electrical potential and appears to be a sensitive spot in biological systems, with problems commonly occurring at the skin, interior of arteries, bladder, lungs and upper respiratory tract. Energy in the body is delivered centripetally via a vortex system and there is no way that a membrane is going to be at the centre of a sizeable vortex. We must therefore expect that certain areas of the body are more prone to tissue degeneration and hence opportune ‘infections’ from bacteria.
Cystitis as an energy deficit
Cystitis is correlated with menstruation and so we can hypothesise that a significant amount of energy may be diverted to the uterus for repair purposes and that there is therefore something of a deficit created in neighbouring organs, including the bladder.
An energy deficit means less vortex energy for the cells, weakening of tissue and an increased likelihood of bacterial proliferation.
Cystitis is also associated with changes in the weather. These affect bio-regulation and the menstrual cycle seems particularly sensitive to such disturbances. A disturbed regulatory system could conceivably redirect energy away from the bladder without triggering menstruation and again, the result is an ‘infection’.
The apparent success of antibiotics
The body is regulated by a bio-field and this has the nature of a ‘control system’. Such systems can sometimes undergo rapid state changes; we may suddenly fall asleep or wake up, suddenly feel sad or experience a sudden temperature flush.
These changes may occur for no discernible reason or may happen as the result of some sort of shock whether it be psychological, physical or pharmacological. We should therefore consider the possibility that similar shocks may promote a sudden shift in regulation that actually restores correct functioning, i.e. a ‘miracle’ cure.
Miracle cures
Sudden and unexpected cures abound. There are many well documented cases at holy springs such as Lourdes and many modern day cases arising from the administration of toxins such as hydroxychloroquine, ivermectin or large doses of vitamins C or D.
There was lots of anecdotal evidence for instant and otherwise inexplicable cures resulting from visits to Lourdes and consumption of, or immersion in, the local spring water. A church commission was set up to assess the miraculous nature of the phenomena and, more interestingly, a medical board was organised to evaluate their scientific validity.
The paper concludes that:
Many instantaneous remissions occur
Many of these are permanent cures
Most patients are women
The diseases most commonly cured were tuberculosis and some form of depression
An attitude of prayer (a receptive state) is thought to be important
The rate of cures has dropped off as the incidence of tuberculosis has waned
Patients report an instantaneous sense of relief, a return of energy, optimism and appetite. People can go from years of being barely alive to completely revitalised. Sometimes the cure is complete in a few seconds and sometimes full recovery takes a few weeks after the initial impulse.
Antibiotic success as a miracle cure
The proposal then, is that cures from antibiotics or other poisons may be via a similar process. A toxin is administered, the regulatory system experiences a ‘shock’ and initiates some emergency protocol that results in a redistribution of energy within the body.
Miracle cures depend upon the interposition of a ‘cognitive’ bio-field between the actual cause and the perceived effect and as a consequence will be effectively impenetrable to organised study. We cannot measure the bio-field and cannot therefore predict outcomes with any degree of certainty.
Anecdotal evidence vs control trials
The experiences of any individual are therefore unique to that individual and may not be in any way representative of population outcomes; they may not accurately delineate a relationship between cause and effect.
Population outcomes themselves are clearly dependent upon season and specific weather conditions but these are rarely controlled for, taken into account or even mentioned. The possible interaction between the Earth’s magnetic field and the intervention in question is never explored.
Experiments on tissue cultures
The claim here is that a cell depends upon a regulated supply of a particular type of energy (free energy, Gibbs energy, vortex energy) for its health and specific biological instructions for its function. In experiments performed outside of a complete organism the cell is missing both of these elements. It therefore does not qualify as a ‘living cell’ and such experiments are essentially meaningless unless there is some specific argument to the contrary.
Cells that lack sufficient energy are more likely to break down and are more likely to succumb to bacterial proliferation.
Bacterial cultures themselves are susceptible to external electromagnetic influence and demonstrate variability according to solar and lunar cycles, e.g. Mirror Cytopathic Effect
Ambient EMF will vary by significant factors from one end of a laboratory to another.
Summary
There is no good evidence that antibiotics cure bacterial infection in the way claimed
There is still no evidence that bacteria can infect healthy tissue and we now have a plausible theoretical argument that supports the idea that such a thing is impossible
The idea that depleted energy is the cause of cellular degeneration can be used as an objective definition of what constitutes ‘healthy’ tissue
Apparent cures of many diseases may be ‘miracle’ cures caused by a state-change of a bio-field
Weather changes are important influences in the cause of many diseases and in the absence of true infection of either bacteria or viruses, the sole cause of many diseases may be electromagnetic disturbances of a bio-field resulting from such changes
Control studies may be very misleading if they are not cognisant of the above points
The existing explanation of the creation of blood vessels is really just a list of observations with no described mechanism and no regard for the laws of physics. Energy conservation is ignored and no fundamental forces are mentioned. New cells appear out of nowhere in precisely the right place with no regard for how this might be organised.
The whole procedure can be viewed as the natural outcome of the activity of electromagnetic vortices.
Below we see the development of a capillary (right) from a mass of hemangioblasts (undifferentiated blood cells).
From left to right:
The phrase ‘undifferentiated mesoderm’ suggests a lack of anything interesting
Hemangioblasts appear as if from nowhere without a described mechanism
The cells cluster together even though they each have a negative zeta-potential
Endothelial cells surround the blood island, again for no apparent reason
Cells again merge together without explanation to form strong capillary tubes
Several questions arise:
How is all this organised?
What forces are involved?
Where does the energy come from to create the new tissue?
Where does all the extra ‘matter’ come from and how does it get there?
How do cells bind together?
Vortex physics
The whole of a biological system is organised by an all pervasive energy field in the form of a fractal vortex structure. Energy spirals inwards and outwards, forming a series of nested toroidal structures similar to the shape of a completed red blood cell.
Each smaller vortex can capture energy from the general vortex field and act as an accumulator and transducer. An energy cascade is formed, guiding the free energy towards the centre of smaller and smaller vortices nested within the system.
The energy concentration at the centre of the smaller vortices is sufficient to allow for the transmutation of elements and possibly even for the de novo creation of matter itself.
Blood islands – ScienceDirect
From left to right
The mesoderm looks random but in fact acts as an energy accumulator, drawing energy from the surroundings and organising it into a series of spherical vortex structures which act as morphological templates for the hemangioblasts.
Energy continues to accumulate and more hemangioblasts are formed. Each cell retains an energy vortex, having a surrounding negative electric field and concomitant magnetic dipole. The dipole draws the cells together via magnetic forces and the electric field keeps them separate, thereby allowing for the self-organisation into clusters that we see in the illustration.
The blood island, comprising many smaller cells, now forms a de facto vortex structure of its own and continues to acquire free energy. Energy cascades inwards and where it meets the accumulated energy of the h-blasts, a vortex ‘radius’ forms which is defined by a sharp concentration of field energy.
Vortex boundaries typically fractalise further to form smaller vortices at the periphery and these smaller vortices in turn form both the energy supply and morphological template for the developing endothelial cells. These, when complete, form attractive forces between each other and cluster together to form the capillary wall.
The wider context
All this action takes place at the periphery of the yolk sac, which is itself a vortex structure absorbing energy from the surrounding field.
Energy tends to concentrate at both the outer limits of a vortex (from external sources), and at the centre (from an internal cascade), and it is at these spheres of influence that membranes tend to form. The two membranes of the endoderm and ectoderm here serve to destabilise the outer vortex and cause it to fractalise into the smaller vortices of the hemangioblasts.
The whole of morphogenesis can be seen as a series of fractal vortex structures providing both energy and morphological templates to fuel and organise the entire process.
Recent observations have noted that initial yolk sac vessels are distant from the majority of yolk sac blood. It has been proposed that endothelial cells migrate proximally from the distal portion of the embryo and encapsulate the extraembryonic blood.
Blood island observations contributed to the classical view that blood cells always originate intravascularly. This idea, however, has been challenged by a number of more recent studies in the mouse. For instance, isolation of cells from gastrulating embryos showed the presence of blood cell precursors within the primitive streak, long before morphological evidence of blood island formation.
The vortex model strongly supports the idea that development is in the following order:
Accumulation of energy
Construction of blood cell precursors
Blood island formation by mutual attraction
Energy accumulation at blood island periphery
Construction of endothelial cells
Construction of artefacts is always on-site so that no transportation or migration is required and indeed, such operations would merely add to the number of phenomena that need an explanation. If the endothelial cells are manufactured elsewhere, we still need to say how they were manufactured and in addition now need to say how they migrated.
Describing cells as ‘migrating’ rather suggests that they move of their own energy and ‘volition’, that they somehow know where they are going, how to get there and when to stop moving.
Unlikely.
Cells are manufactured where they are because that is where the energy is found to effect the manufacture. They will remain at that place as this preserves the vortex structure and ensures a continuous supply of energy for maintenance, function and repair.
Vascular formation
So we have some blood islands surrounded by endothelial cells but this does not constitute a capillary; the islands are disconnected and do not yet form a tube.
Following their formation, individual blood islands extend towards each other and undergo anastomosis (fusion and connection), forming a continuous primitive plexus of vascular tubes – ScienceDirect
The individual cells in a blood island each form a magnetic dipole and are necessarily all aligned in the same direction for the purposes of cohesion, with the consequence that each blood island now forms a de facto bar magnet.
The magnets align north-south within the membranes and start to pull towards each other. A stable tubular field filament is thus formed and further vortex energy flows towards the tube, enabling further development of tissue and the completion of a capillary tube.
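To get a feel for the attraction being invoked here, the following is a minimal sketch (Python) of the standard force between two co-axial magnetic dipoles in the point-dipole approximation; the dipole moments and spacings below are purely illustrative placeholders, not measured values for blood islands.

import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

def coaxial_dipole_force(m1, m2, r):
    # Attractive force between two point magnetic dipoles aligned
    # north-to-south along the same axis, separated by distance r:
    # F = 3*mu0*m1*m2 / (2*pi*r**4). It falls off as 1/r^4, so the pull
    # only becomes appreciable once the islands are already close together.
    return 3 * MU0 * m1 * m2 / (2 * math.pi * r**4)

m = 1e-12   # hypothetical dipole moment of a blood island, A*m^2
for r in (100e-6, 50e-6, 25e-6):   # separations of 100, 50 and 25 micrometres
    print(r, coaxial_dipole_force(m, m, r))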
In all cases, a field structure is manifest as a precursor to the physical organ. This field both supplies the organ with energy and acts as a morphological template.
Blood flow
A capillary tube is completed and is now interposed between the blood island and its energy supply. This supply is now reduced in intensity and modulated from a plain vortex structure to something more complex as determined by the electromagnetic structure of the endothelial cells.
This is a sign for the blood cells to complete their differentiation thereby consuming more energy and allowing the magnetic bonds to weaken. The island breaks up and the blood cells become individual entities now capable of ‘flow’ as an electrodynamic fluid.
The blood is said to start flowing before the heart is complete and certainly before it starts pumping. Such movement requires an energy supply and we can now start to guess where this comes from. The cells no doubt have some residual energy remaining from when they were first formed, but to form a continual flow they will need a refuelling at some point.
The overall vortex flow is still in place and so we can assume that some energy still flows inwards towards the capillary, is modified by the electromagnetic properties of the tissue itself and then continues to flow inwards into the capillary where it is requisitioned by the plasma and blood cells to somehow effect linear movement along the vessel. See: Blood flow and scalar waves
Transportation or transmutation
The question remains then of how these energy fields manage to organise or acquire the necessary physical matter to manufacture a cell:
How is it that the base elements are available in precisely the right proportions and volume to make a new cell?
If the process runs out of carbon, say, where does the extra carbon come from?
What is the process by which a cell signals for more carbon?
How is it transported?
How does the cell recognise a carbon atom?
How does it move it around and how does it know where to go?
Are the cells manufactured elsewhere and transported to the right place?
How does this happen?
The simplest answers to these questions are the most unlikely-sounding from the perspective of conventional science, which is presumably why they are never considered.
Transmutation
Louis Kervran documented many cases of elemental transmutation, specifically claiming that oxygen could be transmuted into carbon.
We have then the possibility that water (H2O) could be broken down into oxygen and hydrogen, that the oxygen is transmuted into carbon and that this carbon is then used to construct bio-molecules. Similar considerations apply to nitrogen.
This is a convenient solution as water is ubiquitous in biological systems. Any depletion of water molecules is easily remedied by the simple mechanism of diffusion. There is no need for the specialised transport of specific molecules or elements – the whole of the construction is from local materials and available energy.
A supervening bio-field
The electrodynamic forces that are described above as assembling the cellular structures are rather strong, short range and not particularly ‘intelligent’. They arise from the laws of physics and emerge from very basic vortex structures.
There is no sense in the laws of physics of any sort of organisational principle or the sort of feedback system required to achieve a stable end state. How does the system decide precisely where and when the blood islands are to be constructed and how does it decide when to stop?
Some higher level control system is clearly required. Such a system cannot override the local laws of physics and therefore must work in conjunction with them.
The page: The nature of the bio-field posits a supervening electromagnetic bio-field that works by subtle influence upon the emergent electromagnetic fields which arise from cellular collectives. The above observations help to reinforce this idea.
The hypothesis of ‘assembly by vortex structure’ seems natural and always locally in accordance with the laws of physics, but incomplete as regards overall organisation. Some other influence is required to provide a ‘subtle’ guide for the whole process, and its presence is inferred rather than directly observed.
This post suggests an overall toroidal topology for the universe and tries to introduce a most important idea: the physical measure of distance is dependent upon (gravitational) field strength (Boscovich, Meyl) and is therefore a function of where in the universe the measurement is made.
The overall topology considered is that of a torus (below) and the behaviour is that of a continually flowing electromagnetic field as described by Konstantin Meyl. The flow is according to the laws of electromagnetism and itself takes the form of a dynamic vortex structure.
Electromagnetic field movement is continuous and therefore takes on the form of a torus as being the only structure capable of sustaining such a flow. Any other attempt at a continuous flowing vector field ends up having a discontinuity somewhere; see the Hairy Ball Theorem of topology: Wikipedia
The universe consists solely of a flowing electromagnetic field which determines the topology and since the field naturally forms a torus, the depiction of the universe as a torus is justified from this consideration alone.
There is only the Field
There is a strong temptation to imagine an electromagnetic field taking on a toroidal shape embedded within a Cartesian grid system which determines distance and angles, but the task here is to consider that it is the field itself that determines both topology and metric.
The field is not embedded in anything at all; there is no distance metric as separate from that which is physically measured, there is no such thing as ‘space’ that is separate from the field and no such thing as ’empty’ space.
All that is measurable is an electromagnetic field and anything that is not part of such a field is not part of the measurable universe and therefore cannot be said to ‘exist’ in any meaningful way.
The idea of a separate ‘ideal’ universe with nice tidy geometry is just a fantasy.
The electromagnetic field is the entirety of the universe and takes on a toroidal form and therefore the universe is toroidal in overall topology, i.e. the shape of the universe is determined by its contents and is not independent of them.
Physical ‘distance’
If we are not embedded in a Cartesian grid system then how is distance defined?
There is surely only one option; we define it from the physical matter of the universe as this is all that is available to us.
Construct some sort of yardstick and declare it to be one Cosmic Unit (CU) long. Imagine it to be the width of one of the grid squares in the above image and try to think what happens as it is moved around the universe.
Field strength is inversely proportional to the size of the square, with smaller squares having greater field strength. Length is determined by field strength: a stronger field compresses distance accordingly and, as a natural consequence, ‘shrinks’ the yardstick to maintain proportion with the grid squares.
As our measurement instrument is moved towards the centre of the torus, the atoms are compressed and the stick physically shrinks, whereas if it moves outwards towards the periphery, then it and all the surrounding physical matter will expand.
To reiterate: all we have as a measurement tool is our physical-matter yardstick. This is our fundamental reference and any idea that it is somehow measuring something else more absolute called ‘length’ is just a hallucination.
All we have available in physics is our observations of physical events, and any relation to an underlying geometric model is mere inference. The theory of physics should start with observations and not with an assumed Cartesian framework with an already existing metric and 3-d geometry.
Is the universe expanding?
Now we are an observer within a toroidal universe and are looking around trying to make sense of things. Some parts of the universe seem to be expanding relative to us and even moving away from us whilst others appear to be contracting or spiralling inwards.
However, if we move to the periphery, where things seem to be expanding, then we will ourselves, expand with the toroidal geometry and find that our home planet is now shrinking relative to us even though we thought it to be constant in size when we were living there.
Moreover, in our expanded state, we find ourselves spiralling inwards much to our surprise and realise that the apparent expansion of the universe as seen from Earth was merely an illusion owing to the fact that our Earthly system is now seemingly shrinking and moving away from the outer reaches of space. This made it seem to us at the time that the universe was actually expanding away from us.
Parts of the universe are therefore expanding relative to us and others are shrinking, but the inhabitants of those parts are unaware of this and presumably imagine themselves to be somewhere near the centre and in an absolute frame of reference.
So is the universe expanding?
Relative to what? There is no absolute measure of distance apart from a yardstick which adapts its size to local conditions and within the universe itself, no perceivable ‘edge’ or defined outer boundary and so the question really doesn’t make much sense.
But .. geometry?
In the diagram below, the triangle on the left has equal sides and equal angles. The sides are each 3 Cosmic Units long, as measured with one-CU rulers (shown).
The triangle on the right has its base in an area of increased (gravitational) field strength (maybe from a local sun) and so the rulers there have shrunk. This means that it still measures 3x3x3 CU, but the angles have changed.
Local distances are determined by field strength which leads to a modified geometry. So geometry itself is determined by field conditions and is no longer ‘absolute’.
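A minimal numerical sketch (Python) of this triangle example, assuming only that the rulers along the base have shrunk by some factor k while those along the other two sides are unaffected; the factor 0.8 below is an arbitrary illustration.

import math

def triangle_angles(side, base, k):
    # The two equal sides still span 'side' unshrunken Cosmic Units.
    # The base is laid out with rulers shrunk by factor k, so although it
    # still *measures* 'base' CU, its physical extent is only base*k CU.
    # The apex and base angles then follow from the law of cosines.
    b = base * k
    apex = math.degrees(math.acos((2 * side**2 - b**2) / (2 * side**2)))
    base_angle = (180.0 - apex) / 2.0
    return apex, base_angle

print(triangle_angles(3, 3, 1.0))   # uniform field: 60.0, 60.0 degrees
print(triangle_angles(3, 3, 0.8))   # base rulers shrunk by 20%: apex ~47.2, base angles ~66.4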
This is important when calculating the distance to other stars and galaxies. Cosmologists assume an invariant Euclidean geometry for the universe, but this does not hold here and the stars may be much ‘nearer’ than we think.
As a spaceship exits our solar system, the field strength diminishes and the ship expands accordingly to a great degree. Vast ‘distances’ are covered in a very small time.
Platonic forms
Anyone who wants to argue that metre rulers are always a metre long needs to think about how to prove this. What is your control? What do you measure a metre against if not some other local physical object or waveform?
The whole idea of an abstract and invariant metric is unprovable. Distances are determined by the size of physical objects and these vary according to field strength.
Field strength varies slightly everywhere and as a consequence there is no such thing as a perfect circle or square anywhere in the universe, no such thing as a Platonic form in actual reality or even the expression of such.
Constants such as Pi exist only in an imaginary realm of perfect geometry.
Physical (real) geometry is determined by the laws of field physics and if something looks a bit like a cube it is because of the local laws of physics and not because of the laws of mathematics. The (approximate) cube is a perfect expression of the field equations and not an imperfect expression of a Platonic form.
Black holes
Take a look again at the overall topology and consider that within this structure lie smaller more local structures which are interpreted as stars, galaxies and black holes.
Now depending where you are on the torus you may see half the galaxy heading towards the central singularity and infer a great force emanating from the core and sucking everything in, or you may see a great outrush of matter pouring from an assumed ‘white hole’.
None of these assumptions are any good here and all that is happening is that matter is moving in an inevitable path as determined by the dynamic topology. Matter does not ‘collapse’ in a black hole but merely shrinks accordingly and will expand again when it comes out the other side.
Gravity
Such behaviour near a planet or star gives rise to the phenomenon known as gravity, which again is assumed to somehow ’emanate’ from the star and suck things towards it. Nobody has seen gravity emanate however and so it is permissible to think of it as an inward spiralling of the field geometry.
This isn’t too outrageous a statement and is comparable to Einstein’s bendy space idea except here we have no need for a separation between space and matter and all is a pleasing unity.
Einstein’s spacetime
“Spacetime tells matter how to move; matter tells spacetime how to curve.” – John Archibald Wheeler
Here we have a superfluity of ‘stuffs’ that is common in mainstream physics. How do spacetime and matter communicate with each other in such a fashion and where are the laws governing such an interaction? How is it proved that spacetime and matter are really separate entities? What are the innate properties of ‘spacetime’ that allow it to be manipulated in such a way and how does it ‘move’ matter?
More pertinently we can ask: “What does it mean that space is ‘curved’ and with respect to what exactly?” The whole idea of ‘curvature’ seems to assume the existence of some sort of Cartesian reference grid as separate from the curved space.
Progress has not been made and all that has happened is that the conceptually difficult part has been moved from one place to another in the hope that nobody will notice.
If matter and geometry are so closely linked, we can consider that they are really both manifestations of some other underlying phenomenon and that such a phenomenon is now seen to be an electromagnetic vortex field.
Gravitational lensing
The phenomenon of light bending its way past a massive body now needs almost no explanation.
The gravitational field of a star is no longer to be regarded as a force or even a distortion in spacetime but simply the centre of a field vortex.
The field strength closer to a star will be greater than the strength slightly further away and so lengths ‘increase’ further away from the star. A photon is a finite size in vortex physics and so contracts nearer the star and expands further away. Translation: it follows a curved path.
Space is not bent as there is no such thing as space to be bent, only a toroidal field creating a toroidal geometry.
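For comparison, the size of the bending that any account has to reproduce can be taken from the conventional formula 4GM/(c²b) for a ray passing at distance b from a mass M. The short Python check below quotes this purely as the observed magnitude, not as a derivation within the vortex picture.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.96e8       # solar radius, m

def deflection_arcsec(mass_kg, impact_parameter_m):
    # Conventional light-bending magnitude, 4*G*M / (c^2 * b),
    # converted from radians to arc-seconds.
    radians = 4 * G * mass_kg / (C**2 * impact_parameter_m)
    return math.degrees(radians) * 3600

print(deflection_arcsec(M_SUN, R_SUN))   # ~1.75 arc-seconds for a sun-grazing ray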
Earth-sun orbital anomaly
The Earth is said to orbit the sun but the position of the sun is not fixed, being displaced by a distance of over a million kilometres by the gravitational fields of the Earth and other planets. Despite this, the gravitational pull on the Earth from the sun seems always to be directed towards where the sun is now, never towards where it was a few minutes ago. (Van Flandern)
Some have interpreted this as the gravitational field from the sun travelling at many times the speed of light in order to reach the Earth in time, but nobody has seen a gravitational field ‘travel’ or ‘radiate’ from the sun and in any case better explanations are now available.
There is no unlimited gravity that emanates from the sun but instead the sun and planets move in a coordinated fashion according to an ever changing vortex geometry and as such it cannot be considered that the sun is ‘causal’ in moving the planets or that Saturn is ‘causal’ in moving the sun.
The sun is positioned at the centre of the most powerful vortex and it is this vortex that has the most influence on the solar system as a whole, thereby creating many correlations between the movements of the planets and the position of the sun. However, this in no way implies that the sun itself is the origin of such movements.
The whole arrangement moves as a whole and according to the laws of vortex physics. The sun is moved by the vortex as is Saturn and the Earth itself and any perceived influence of one body directly upon another is merely an illusion.
In addition to this mechanism, we now should concede that photons are travelling from the sun within a geometric vortex and will move accordingly. The idea that light always travels in a straight line through space is now meaningless as there is no such thing as Euclidean space and therefore no such thing as a straight line.
Instead, we have photons moving through a vortex system and whatever location of origin, will impact the Earth in a direction determined by the vagaries of their whole path taken from the sun through the intervening vortex field.
If you want to try and guess their origin from the direction they approach Earth, then .. “Good luck!”. This is like trying to locate the source of a river by standing at the estuary.
The mechanism
How does all ‘matter’ shrink in a strong gravitational field?
In Meyl’s vortex physics, all matter is made from an agglomeration of electrons and an electron is just a stable field vortex with electrical spin and a magnetic dipole. Put such a thing inside a magneto-gravitational field and the radius of the spin reduces so the radius of the electron reduces and all matter then shrinks.
Evidence?
Tamarack mines experiment: a long piece of wire was dropped down a mine shaft and was found to have shrunk by a significant amount, the implication being that it is the increase in strength of the magnetic component of the Earth’s gravitational field that is responsible.
Hafele–Keating experiment: clocks in aeroplanes run at different rates depending upon whether they are travelling east to west or west to east.
The origins
This scheme makes the idea of a Big Bang radiating all the energy and matter in the universe both unlikely and unnecessary.
We don’t know how things ‘started’ or even if there was a ‘start’, but if the general movement is from periphery to centre, opposite to conventional thinking, then it would make sense to think about the origins in a similar manner.
Field ‘energy’ originates as a vortex somehow and immediately starts to spiral inwards. The energy density increases and smaller vortices arise near the centre which will form smaller and smaller vortices in a fractal pattern.
These smaller vortices form galaxies, stars and single atoms, in that order, with the smaller structures arising from the larger and not the other way around.
The smallest vortices stabilise around the size of an electron and matter has materialised from a pure electromagnetic field. The creation of matter continues throughout the lifespan of the universe and there is no upper limit on the total mass.
It may seem that the universe needs to be exceedingly large at the outset in order to contain enough energy to materialise such matter and that the sheer volume required is enough to counter the argument. This is not the case, however, as there is no objective ‘size’ to the universe at all and all subsequent ‘expansion’ can as easily be thought of as inward expansion as outward.
There is no real expansion, creation or loss, but instead an increasing complexity of vortex structure arising from the inward concentration of field movement.
A Theory of Objectivity
How on earth do we do any science when distances keep changing and we have not even a consistent way of measuring the passage of time?
Meyl has the answer which he calls his Theory of Objectivity. A transformation is made from local coordinates to global, calculation is made in this new objective framework and the answer is transformed back into local coordinates.
There are no fundamental constants and hence no fine tuning of the universe is necessary. Most fundamental constants come from the need to translate between the different ‘stuffs’ and energies of contemporary physics. Once these are reduced to a single set of equations, the problem disappears.
This never happened
The elementary particles
In the chart below, Konstantin Meyl shows the measured masses (relative to the mass of the electron) of the elementary particles and compares them with the values he has calculated from his own field equations.
The correlation is striking and cannot be coincidence.
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
The periodic table
In this next chart, again from Meyl, the measured radii of the elements from the periodic table are compared with values calculated from the more fundamental field equations. No other informational input is necessary.
The values show precise correspondence at the start of each new electron shell and drift apart slightly as the complexity of calculation necessitates simplification by series truncation.
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
Avogadro’s number
Avogadro’s Law: “Equal volumes of all gases, at the same temperature and pressure, have the same number of molecules.” – Wikipedia
Avogadro constant: “The Avogadro number is an exact number equal to the number of constituent particles in one ‘mole’ of any substance” – Wikipedia
Simplification: “The same number of molecules take up the same amount of space” (Each molecule is the same size?)
Fixed by decree: “In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant N_A as the exact value 6.02214076 × 10^23 mol^−1” – Wikipedia
There is no sensible explanation for this within mainstream physics. The value of the constant cannot be calculated directly from any fundamental theory of gases, so it is simply decreed that the number itself is a fundamental constant of physics. This discourages any attempts to investigate the matter, removes the need for any proposed mechanism and obviates the need for any further measurements of the value, since it is already established as a fixed element of the system!
An explanation of the constant and a derivation from Meyl
The obvious inference from measurements is that the molecules are spaced out evenly throughout the volume, being surrounded by some ‘sphere of influence’ which keeps them apart and which provides resistance to compression via repulsive forces. These forces nevertheless allow the molecules to move around (diffusion and flow) with a little resistance (friction).
The only forces worth considering here are electromagnetic in nature and so we need some sort of field structure that creates such a sphere around an atomic nucleus. The field will be some arrangement of electrically negative vortices which are attracted to the nucleus but repel other such structures.
A credible description of the gaseous state of matter
The extra energy in the gas state has caused the eight electrons of the n=2 shell of the Oxygen atom to come out of their usual concentric orbitals to form an eight-fold ring around the outside of the nucleus. The reduced field strength at this distance from the centre has caused the electrons to expand suddenly to many times their original volume.
The electrons stick together via magnetic dipole forces but repel other negatively charged elements. The electrons rotate individually and also rotate as a ring, and this represents a means of energy storage and energy transfer. A cross-sectional view from the north pole is shown but in reality the whole shape is that of a peeled orange: an overall spherical form made up of eight segments, which are the electrons.
Whatever the original size of the molecule, the volume is now dominated by the size of the expanded electron shell and this is the same for each atom at least. Something similar must be happening with compound molecules.
Gas pressure and Avogadro laws are now explained along with the critical (as opposed to continuous) change from liquid to gas.
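As a toy illustration (Python) of why identical ‘spheres of influence’ would reproduce Avogadro’s law: if every molecule, whatever its species, claims the same sphere of space, then the number fitting into a given volume is fixed. This is a sketch only; the radius below is an arbitrary placeholder chosen to give roughly the familiar molar volume, not a value derived from the field equations.

import math

def molecules_per_litre(sphere_radius_m, packing_fraction=1.0):
    # If each molecule claims a sphere of influence of the given radius,
    # the number fitting into one litre is the litre divided by the
    # volume of one sphere (times a packing fraction).
    sphere_volume = (4.0 / 3.0) * math.pi * sphere_radius_m**3
    return packing_fraction * 1e-3 / sphere_volume   # one litre = 1e-3 m^3

# A placeholder radius of ~2.07 nm gives about 2.7e22 molecules per litre,
# i.e. roughly one mole per 22.4 litres - the same for every gas by construction.
print(molecules_per_litre(2.07e-9))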
Gravitational constant
“About a dozen measurements of Newton’s gravitational constant, G, since 1962 have yielded values that differ by far more than their reported random plus systematic errors. We find that these values for G are oscillatory in nature, with a period of P = 5.899 +/- 0.062 year, an amplitude of (1.619 +/- 0.103) x 10^{-14} m^3 kg^{-1} s^{-2}, and mean-value crossings in 1994 and 1997.” – Anderson et. al.
So not only do measurements vary but they vary with a certain pattern which actually correlates with the varying rotational speed of the Earth:
“Of other recently reported results, to the best of our knowledge, the only measurement with the same period and phase is the Length of Day ” – ibid
Most sources will say that there is and can be no variation at all in the gravitational constant simply because it is declared as a fundamental constant of nature. Any apparent discrepancies in the value must therefore be caused by problems with the measurement method:
“However, we do not suggest that G is actually varying by this much, this quickly, but instead that something in the measurement process varies” – ibid
One possibility mentioned by Anderson et. al. is that the whole process is somehow affected by the Earth’s magnetic field:
“Least unlikely, perhaps, are currents in the Earth’s fluid core that change both its moment of inertia (affecting LOD) and the circumstances in which the Earth-based experiments measure G. In this case, there might be correlations with terrestrial magnetic field measurements.” – ibid
Variations in measurements of the gravitational constant – Speake, Quinn
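To put the reported oscillation in perspective, the amplitude quoted above can be compared directly with the accepted value of G; the short Python snippet below is simple arithmetic on the published figures, nothing more.

G_ACCEPTED = 6.674e-11          # accepted value of G, m^3 kg^-1 s^-2
AMPLITUDE = 1.619e-14           # oscillation amplitude reported by Anderson et. al.
PERIOD_YEARS = 5.899            # reported period of the oscillation

relative_variation = AMPLITUDE / G_ACCEPTED
print(relative_variation)               # ~2.4e-4, i.e. roughly 240 parts per million
print(2 * relative_variation * 100)     # peak-to-peak swing of ~0.05 percent over ~5.9 years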
Gravity as an emergent effect of magnetic dipoles
Many scientists including Konstantin Meyl and adherents of the Electric Universe Model have suggested that gravity is really just an average of the electromagnetic fields arising from the constituent atoms of matter.
The field arises from the sum of the magnetic fields of a random assortment of atoms and will consequently become much stronger if the atoms are aligned and regularly spaced such as in a bar magnet.
Meyl gives arguments for the masses of the elementary particles (see above) and calibrates them with respect to the mass of an electron, obtaining very good agreement with experimental results.
So gravity is not fundamental but arising from magnetic fields, with the cumulative effect in macro sized lumps of matter dependent upon the precise arrangement of atoms and possibly the presence of other electromagnetic fields.
The mass of an electron according to Meyl is not fundamental but depends upon the speed of light.
How is the gravitational constant measured?
Good question. The papers cited above merely say that the constant has been ‘measured’ by several different teams. This gives the impression that you can buy a device to wave in the air and get a reading in both metric and imperial units if you are lucky.
This is not the case and what is measured is rotating balls or falling weights, with the gravitational constant somehow inferred from such measurements.
The only physical measurement we ever see in real life is the displacement of a visual marker on some instrument or other, whether it be the hands on a clock or glowing digits on an electronic device. Everything else is an artefact of the model.
To say that the gravitational constant is ‘measured’ is highly misleading; it is interpreted from measurements and according to a theoretical framework. Now if your theoretical framework has this value defined as ‘constant’ and it turns out to be variable then you are already in a bit of a mess.
What is ‘mass’?
There is no consistent definition of ‘mass’. It is held to be fundamental (of course!) and is described as an ‘innate’ property of matter, but the only existing definitions are contradictory and circular.
“Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses.” – Wikipedia
Oops! Mass is not related to the quantity of matter!
If mass is not related to the quantity of matter and we have no other definition apart from a collection of purported measurement techniques, then how can it be ‘intrinsic’?
“Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body’s inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object’s mass also determines the strength of its gravitational attraction to other bodies.” – ibid
But it is already established that the strength of gravitational attraction is dependent upon the gravitational constant, not just the mass.
We find that mass is defined by various measurement techniques:
Resistance to acceleration (inertia)
Strength of gravitational attraction to other bodies
Power to attract other bodies by its own gravity
These are emphatically not physically equivalent unless shown to be so by experiment and theory. Just saying it is so does not make it so.
Note that all these definitions are by measurement of something other than mass itself. The mass, which is presumed fundamental and declared ‘intrinsic’, is actually a theoretically inferred value from other (measurable) quantities.
Moreover, the strength of gravitational attraction (mass) depends upon the gravitational constant and this has been shown to vary, or at least has not been shown to be constant.
In addition to this we find that calculations of the gravitational constant itself all depend upon knowing the precise values of the masses involved. Therefore: Gravity depends upon mass and mass is defined with respect to gravity.
This is circular self-referential nonsense!
Inertia as mass
The addition of inertia as a definition of mass does not help. This just adds an extra quantity that needs defining, measuring and somehow integrating into an already shaky framework.
How can this be achieved if inertia is absolute but other forms of mass vary? What is the theoretical mechanism that describes how the inertial mass is the same as the gravitational? In what sense then are they ‘equivalent’?
Inertial mass is measured by the force needed to produce an acceleration on an object. It therefore needs an acceleration in order to be manifest and yet at the same time is said to be an ‘innate property of matter’.
How is this conclusion reached if the mass is never measured with respect to a body at uniform speed? How do we know that the mass of such an object persists at the measured value and what does this even mean?
An analogy with dynamic friction
If this seems like sophistry, first consider the phenomenon of ‘friction’. It makes a good analogy: nobody knows quite how it works, and the property of dynamic friction is only measured in moving objects. The frictional properties of stationary objects are different from those of objects in motion and both are dependent upon the interaction between the objects.
Nobody thinks that friction is an innate property of any material but varies with speed and depends upon the relationship between the two surfaces. Dynamic friction is only present when motion is involved and disappears when motion ceases. Nobody asks “Where has it gone?” because it is not assumed to be an immutable property of matter.
Lenz’s law
A magnet dropped down a copper pipe will travel much slower than if the pipe were not there according to Lenz’s law.
What has happened to all the mass? If mass is intrinsic then there is some other (magnetic) force acting upon the magnet to oppose the motion. No magnetic field was present in the copper pipe before the motion started and the field of the magnet is not sufficient by itself to produce the slowing down. The force did not exist prior to the experiment and disappeared after it ended. The new property was actually created by the experiment itself.
Again, nobody would think that this retarding force is an intrinsic property of matter, so how can they be so certain as to claim that ‘mass’ is such a property?
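For reference, the conventional account of the copper-pipe demonstration models the eddy-current drag as a force proportional to the magnet’s speed, so the magnet settles at a terminal velocity where the drag balances its weight. A minimal Python sketch follows; the drag coefficient is a made-up placeholder, since it depends on the particular magnet and pipe.

def terminal_velocity(mass_kg, drag_coefficient, g=9.81):
    # Eddy-current braking is usually modelled as F_drag = -k*v.
    # The magnet stops accelerating when m*g = k*v, i.e. v = m*g/k.
    return mass_kg * g / drag_coefficient

# Hypothetical values: a 10 g magnet and a drag coefficient of 2 N per (m/s)
print(terminal_velocity(0.010, 2.0))   # ~0.05 m/s - a slow, steady descent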
If, as suggested above, the gravitational force arises from the electromagnetic field interaction between the field of an object and the field of the Earth then the above considerations are pertinent. The current formulation of the mass of an object as only dependent upon the object itself, however, effectively rules out any investigation of such phenomena.
An empirical definition?
“Mass can be experimentally defined as a measure of the body’s inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. ” – Wikipedia
This is as confused as a definition can be.
If all that is measured is a resistance to acceleration then that is all that may be deduced. An ‘intrinsic property’ may not be inferred and there should be no automatic conclusion of a similar effect in different gravitational fields.
“The object’s mass (i.e. resistance to acceleration) also determines the strength of its gravitational attraction to other bodies.” How does this work exactly? How can this be deduced? Do we assume that a material with a high frictional coefficient also has the power to attract other objects? No, of course not.
We have several different measurement techniques measuring several different quantities and the claim is that they are all ultimately measuring the same thing, that they are ‘physically equivalent’. But how can this be justified?
A measurement is just a measurement and a concept just a concept. The concept of mass is just a concept, as it can never be measured directly. It can be deduced only by the application of external forces and the measurement of movement, followed by an interpretation made according to a specific theoretical model.
So two different results are obtained from two different measurement techniques, interpreted according to two conceptually different theoretical frameworks, and are then declared to be “physically equivalent”! No. Theoretically equivalent, maybe, but ‘physically’? No, the phrase has no meaning.
If inertia is simply owing to the quantity of matter present then it cannot possibly be related to mass, according to the initial quote from Wikipedia!
Time
There appears to be no consistent definition of time as an independent physical variable.
The rate of a swinging pendulum depends upon gravity and so will change with variations in the gravitational constant and will vary according to its location on the Earth’s surface.
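For a small-amplitude pendulum the period is T = 2π√(L/g), so any change in the local gravitational field shows up directly in the clock rate. A quick Python sketch of the size of the effect, using round illustrative numbers for the standard polar and equatorial values of g:

import math

def pendulum_period(length_m, g):
    # Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).
    return 2 * math.pi * math.sqrt(length_m / g)

L = 0.994   # roughly the length of a 'seconds pendulum', in metres
print(pendulum_period(L, 9.832))   # polar gravity      -> ~1.998 s
print(pendulum_period(L, 9.780))   # equatorial gravity -> ~2.003 s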
The rate of atomic clocks varies even with two clocks in the same building. They will run at different speeds during an eclipse and even differ according to their alignment with respect to the Earth’s magnetic field. See diagram below.
Meyl: Scalar waves..
Distance
In the Tamarack mine experiment a long piece of wire was lowered into a mine shaft and found to have shrunk considerably (see: Tamarack mines experiment). The explanation from Meyl is that a horizontal component of the Earth’s magnetic field increases towards the centre of the Earth and this is responsible for shortening the wire.
The Hafele – Keating experiment showed the opposite effect when distances were measured in a plane flying at altitude; distance was stretched out instead of shrunk.
A simple measure of distance is therefore subject to interpretation and such interpretation will vary according to the model involved.
Attempts to measure distance by wavelengths of light are subject to Doppler shift and again are not direct measurements at all but interpretations filtered through some theoretical framework.
π
Surely the ratio of a circle’s radius to its circumference is a fixed and fundamental constant of the universe?
Alas, no. Pi is a constant in Euclidean geometry but the experiments above suggest that the physical world does not follow the rules.
In the field theory of Meyl, physical length is determined by field strength and so the apparent geometry of the real world is also a reflection of field strength and this is unlikely to give rise to a Euclidean geometry.
There is no proof that the physical world is super-imposed upon a Cartesian grid; all we have are some sort of physical measurements and the Mine experiment shows that our measuring tools do not follow the rules of traditional geometry if gravitational fields are involved.
If we take a long piece of string out into space and measure the radius as we go, we expect to find that the circumference of a circle orbiting the Earth is 2πr but both Meyl’s theory and the Hafele-Keating experiment suggest otherwise.
Geometry is therefore a function of field strength and this will vary continuously through space. The existence of a perfect circle or square in physical space is highly unlikely but the space in which we live is approximately Euclidean over small distances and so nobody has noticed.
Dark Matter
The invention of Dark Matter and Dark Energy with no direct evidence whatsoever of their existence is surely one of the greatest embarrassments of modern science. This ‘stuff’ is assumed to comprise over 95% of the known universe simply because cosmologists have an incorrect model of gravity.
Konstantin Meyl proposes that in addition to gravity there is the possibility of resonant neutrino attraction between individual galaxies and stars to help resolve the matter.
We can note here that if you have no stable concept of time nor distance and have declared gravitational forces to be constant when they are measurably variable and unrelated to the amount of matter, then you are already in Big Trouble.
The speed of light
The speed of light is declared to be a fundamental constant within the framework of Einstein’s relativity. What this means is that whatever speed you manage to measure for light it must necessarily come to the same value. If it appears to be a different value then it is something else that has varied.
‘Speed’ is calculated as distance per unit time but as explained above, there is no consistent definition of either distance or time and so if the speed of light is different from its decreed value then scientists are free now to blame variations in either time or distance according to their whim.
In Rupert Sheldrake’s TED talk, “The science delusion”, he mentions that the speed of light slowed down by about 20 km/s between 1928 and 1945 before resuming its approved value. The response of the standards authorities was to simply re-define the length of the metre in terms of the speed of light so as to correct for the difference, thereby confirming that distance is no longer a fundamental quantity of physics.
The units of the gravitational constant
The gravitational constant is equal to approximately 6.67 × 10^−11 cubic metres per kilogram per second squared, i.e. 6.67 × 10^−11 m^3·kg^−1·s^−2.
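For reference, these units follow directly from rearranging Newton’s law of gravitation, F = G·m1·m2/r², to give G = F·r²/(m1·m2): substituting newtons (kg·m·s^−2) for the force, m^2 for r² and kg^2 for the product of the masses yields m^3·kg^−1·s^−2.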
We will merely note here that not one of metres, kilograms or seconds has a stable definition and yet they are all assumed to combine together to give a constant value!
The vortex physics of Konstantin Meyl
The vortex physics of Konstantin Meyl contains a single vector differential equation with one ‘constant’ only which he calls ‘c’, by analogy with the speed of light, and which in his framework is the speed of field propagation. There are no other variables within the system with which to compare this value and so ‘c’ may be set to unity without any loss of information.
The whole of physics is described via a single equation which means there are no separate ‘stuffs’ needing adaptation or calibration to one to another and hence no fundamental constants are needed.
Moreover, since there is only one equation, there is no need to translate from one set of units to another and no possibility of extra units arising, so there is never any need for constants at all; there is simply no place for them in the theoretical framework.
Fine tuning?
Nope. There are no constants and therefore nothing to fine tune.
The fine tuning argument has been used to advocate for intelligent design on the grounds that the precise values of the constants we see cannot have arisen by accident whilst atheists prefer to think that the constants are different in an infinite number of different universes, with only the single universe that we inhabit being lucky enough to have the right values.
We now see that the idea of fine-tuned constants arises from an inadequate model of physics and that all those fascinating debates are just a waste of time. Either side could have paused to think that contemporary physics is incomplete and that this is what necessitates the introduction of all these new constants.
References
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
This post is an AI-generated summary of the book ‘Neutrino Power’ by Konstantin Meyl and Johannes von Buttlar.
The document discusses a conversation between Johannes von Buttlar and Prof. Dr. Konstantin Meyl about the experimental evidence of room energy and neutrinos, exploring new physical theories and their implications for understanding the universe.
Discussion on Free Energy Concepts
The conversation between Johannes von Buttlar and Prof. Dr. Konstantin Meyl explores the concept of “free energy,” its implications, and the potential for new energy sources derived from neutrinos and scalar waves. They analyze the limitations of current energy technologies and the need for innovative approaches to meet future energy demands.
The term “free energy” is discussed, emphasizing that all energy sources are ultimately free but come with costs related to extraction and distribution.
Concerns about environmental impacts and the sustainability of current energy sources are highlighted.
Alternative energy solutions, such as wind and solar power, are critiqued for their limitations in reliability and energy output.
Prof. Meyl asserts that a new form of energy exists, which he refers to as “free energy,” and he believes it can be harnessed effectively.
Neutrinos and Scalar Waves
The dialogue delves into the properties of neutrinos and scalar waves, suggesting that these phenomena could provide a new understanding of energy transmission and interaction.
Neutrinos are described as subatomic particles that may have mass and charge, challenging existing scientific assumptions.
Scalar waves are introduced as a form of energy transmission that operates differently from traditional electromagnetic waves.
Prof. Meyl presents experimental evidence suggesting that scalar waves can transmit energy without the losses associated with conventional methods.
The potential for harnessing these energies for practical applications is emphasized, with claims of achieving efficiencies exceeding 500%.
Tesla’s Contributions to Energy Science
The discussion acknowledges Nikola Tesla’s pioneering work in energy transmission and his theories regarding scalar waves, which have largely been overlooked in modern physics.
Tesla is credited with discovering the principles of scalar waves and their potential applications in energy transmission.
His experiments demonstrated the ability to transmit energy wirelessly, which is now being revisited in light of new scientific understanding.
The conversation suggests that Tesla’s insights could lead to breakthroughs in energy technology if properly recognized and developed.
Experimental Evidence and Practical Applications
Prof. Meyl shares details about his experiments that demonstrate the principles of scalar wave energy transmission, providing a basis for further exploration in this field.
The experimental setup involves a wireless energy transfer system using resonant coils, which successfully transmits energy between sender and receiver.
Measurements indicate that the system can achieve efficiencies of over 1000%, challenging conventional energy transfer models.
The experiments are designed to be reproducible, allowing others to verify the findings and explore the technology further.
Implications for Future Energy Solutions
The conversation concludes with reflections on the potential impact of these discoveries on future energy systems and the need for a paradigm shift in energy technology.
The authors argue for a reevaluation of current energy practices in favor of more sustainable and efficient methods based on scalar wave technology.
They envision a future where energy can be harnessed more effectively, reducing reliance on fossil fuels and minimizing environmental impact.
The discussion emphasizes the importance of interdisciplinary collaboration to advance understanding and application of these concepts in practical energy solutions.
The Coupling of Scalar and Transverse Waves
The text discusses the interrelationship between scalar waves and transverse waves, emphasizing their spontaneous transformation and coupling in various applications. This coupling has practical implications in fields such as telecommunications and electromagnetic compatibility.
Scalar and transverse waves can transform into each other spontaneously.
Both types of waves appear in the same wave equation.
Practical examples include the reception of ground waves and broadcast waves using the same antenna.
Filtering scalar waves can reduce measurable field strength but does not eliminate them entirely.
Implications of Scalar Wave Filtering
The conversation highlights the challenges and potential of filtering scalar waves, particularly in the context of mobile phone usage and electromagnetic shielding. The effectiveness of shielding against scalar waves is questioned.
A Faraday cage can filter out transverse waves, allowing only scalar waves to pass.
Filtering methods may not provide complete protection against electromagnetic pollution.
The coupling of wave types means that reducing one type may also reduce the other.
Health Concerns Related to Mobile Phone Usage
The discussion raises concerns about the health implications of mobile phone radiation, particularly the effects of scalar waves on users. The conversation suggests that current mobile technology may not adequately address these health risks.
Mobile phones emit both transverse and scalar waves, with scalar waves potentially being more harmful.
Users are advised to use external antennas to mitigate exposure.
There are reports of increased learning difficulties in children near mobile phone towers.
The Role of Education in Addressing Wave Issues
K.M. emphasizes the importance of educating students and professionals about scalar waves and their implications for technology and health. This education aims to raise awareness and improve technology design.
K.M. conducts lectures and seminars to inform about electromagnetic compatibility and scalar waves.
There is a need for better understanding among engineers regarding the implications of scalar waves.
K.M. aims to influence technology development to minimize biological risks.
Critique of Current Mobile Technology Development
K.M. criticizes the design of current mobile phones, arguing that engineers lack understanding of scalar waves, leading to potentially harmful designs. The conversation suggests that this oversight could have serious health implications.
Current mobile phones are optimized for scalar waves, which may increase health risks.
The trend of shortening antennas has led to unintended consequences, such as increased scalar wave emissions.
K.M. calls for a reevaluation of mobile technology to address these issues.
Historical Context of Wave Physics
The text provides a historical perspective on the development of wave physics, particularly the decline of vortex physics in favor of Newtonian mechanics. This shift has implications for understanding modern physics.
Vortex physics was historically significant but has been marginalized in favor of Newtonian methods.
The inability to isolate and measure vortices has hindered their acceptance in modern physics.
K.M. advocates for a return to vortex concepts to better understand physical phenomena.
The Need for a New Field Theory
K.M. proposes the development of a new field theory that incorporates both vortex and potential waves, challenging the limitations of Maxwell’s equations. This new theory aims to provide a more comprehensive understanding of electromagnetic phenomena.
K.M. suggests that Maxwell’s theory is incomplete and lacks causal relationships.
The proposed hydromagnetic field theory would replace the need for quantum explanations.
This new theory could unify various physical phenomena, including gravity and chemistry.
Understanding the Nature of Particles
The conversation explores the nature of particles, particularly electrons, and their properties as potential vortices rather than discrete entities. This perspective challenges traditional views in quantum physics.
Electrons are described as dipoles rather than monopoles, with both positive and negative charges.
The spherical shape of particles is attributed to the pressure of the vacuum.
The duality of electric and magnetic fields is emphasized, with implications for understanding particle behavior.
Conclusion on the Future of Physics
The text concludes with a call for a paradigm shift in physics, advocating for a more integrated approach that considers both fields and particles as interconnected phenomena. This shift could lead to new discoveries and advancements in technology.
A new understanding of fields and particles could revolutionize physics.
The integration of vortex and potential theories may lead to breakthroughs in various scientific fields.
K.M. emphasizes the importance of re-evaluating established theories to foster innovation.
The Concept of Antimatter and Particles
The discussion revolves around the existence of antimatter, its relationship with matter, and the implications of particle interactions. The conversation highlights the theoretical framework of particles and their antiparticles, particularly focusing on electrons and positrons.
Two possible vortex directions exist: clockwise or counterclockwise, affecting the sign of field indicators.
An electron, with a negative charge, can transform into a positron, which has a positive charge at its center.
Antimatter is theorized to exist in equal quantities to matter, suggesting the potential for entire solar systems made of antimatter.
When matter and antimatter collide, they annihilate each other, resulting in the release of energy in the form of light.
The photon is described as a pair of oscillating electron-positron particles, exhibiting dual properties of matter and antimatter.
The Nature of Neutrinos and Their Properties
The conversation delves into the characteristics of neutrinos, their interactions, and their role in particle physics. Neutrinos are presented as unique particles with specific properties that differentiate them from other particles.
Neutrinos are considered as oscillating ring vortices, possessing a swinging charge that averages to zero, allowing them to pass through matter undetected.
They interact weakly with matter, causing phenomena like beta decay in neutrons.
The model suggests that neutrinos can be harnessed for technological applications, termed “Neutrinopower.”
Neutrinos have no mass and can travel at speeds exceeding that of light under certain conditions.
The Strong Interaction and Proton Stability
The discussion addresses the strong interaction, its role in atomic nuclei, and the stability of protons. The conversation critiques existing theories and proposes a new model for understanding these phenomena.
The strong interaction, or nuclear force, is responsible for holding atomic nuclei together despite the repulsion between positively charged protons.
Current theories, including the introduction of quarks and gluons, are criticized for lacking empirical support and clarity.
The proposed model suggests that protons consist of an electron and a positron pair, leading to a stable configuration that explains their magnetic moment and charge.
The stability of protons is attributed to the internal structure and the arrangement of their constituent particles.
The Role of Faraday’s Law in Electromagnetic Theory
The conversation highlights Faraday’s law of induction and its implications for understanding electromagnetic fields. The discussion emphasizes the need for a new approach to field theory based on Faraday’s principles.
Faraday’s law describes the relationship between magnetic and electric fields, demonstrating that a moving magnetic field induces an electric field.
The duality of electric and magnetic fields is emphasized, suggesting that both can transform into one another under relative motion.
The discussion proposes a new mathematical framework that incorporates Faraday’s law as a foundational principle for a comprehensive field theory.
The approach aims to reconcile existing theories with empirical observations, moving beyond the limitations of Maxwell’s equations.
The Ether Concept and Its Scientific Relevance
The conversation explores the historical and contemporary significance of the ether concept in physics. The discussion critiques the dismissal of the ether and its implications for understanding light and electromagnetic fields.
The ether is defined as the medium through which light propagates, providing a framework for understanding the speed of light.
Historical experiments, such as the Michelson-Morley experiment, failed to detect an ether wind, leading to the rejection of the ether concept.
The discussion argues for the ether’s relevance, suggesting it as a necessary component for explaining electromagnetic phenomena.
The ether is posited as a field that influences the propagation of light, with implications for understanding the nature of space and time.
The Nature of Light Speed
The discussion revolves around the concept of light speed as a variable rather than a constant, challenging traditional physics. The implications of this perspective suggest a need for new mathematical transformations to describe motion between different inertial systems.
K.M. argues that if light speed is variable, a new coordinate transformation is necessary, incorporating Lorentz transformations as a special case.
J.v.B. highlights the mathematical complexity of the Lorentz transformation, questioning its physical interpretation (the standard form is reproduced just after this list for reference).
K.M. asserts that Einstein’s assumption of constant light speed introduces paradoxes, which could be avoided with a different approach.
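For reference, the transformation under discussion is the standard Lorentz transformation between two inertial frames in relative motion at speed v along the x-axis, with its characteristic factor γ:

$$x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

In the limit v ≪ c, γ → 1 and the Galilean transformation x' = x − vt, t' = t is recovered. This is quoted only to make the "Lorentz transformations as a special case" remark concrete; the alternative transformation K.M. has in mind is not specified here.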
The Role of Fields in Physics
K.M. proposes that physical phenomena, including length contraction and gravitational effects, are influenced by fields rather than just motion. This perspective leads to a new understanding of how fields dictate physical measurements.
The concept of length contraction is tied to the Lorentz transformation, which K.M. connects to field strength.
K.M. emphasizes that the electric and magnetic fields influence the dimensions of objects, leading to observable effects like length contraction.
The relationship between field strength and length is expressed as a simple proportionality, contrasting with complex mathematical expressions.
Objectivity vs. Relativity in Physics
K.M. distinguishes between a subjective observer theory and an objective theory that seeks to understand physical reality beyond observation. This shift in perspective aims to provide a more accurate representation of physical phenomena.
K.M. criticizes the reliance on subjective observations in modern physics, advocating for an objective approach that considers what physically occurs.
The objectivity theory posits that the constancy of light speed is a mere measurement constant, not a fundamental property of nature.
J.v.B. acknowledges the challenges of reconciling subjective observations with objective reality.
Unifying Forces and Interactions
K.M. presents a unified theory of interactions, suggesting that all forces, including gravity and electromagnetism, can be explained through field interactions. This approach offers a new framework for understanding fundamental forces.
The theory posits that the perceived gravitational attraction between particles arises from their field interactions rather than a traditional force.
K.M. explains that electromagnetic interactions result from the behavior of open field lines, while closed field lines correspond to neutral particles.
The model suggests that gravitational effects are a consequence of the geometry of space influenced by these fields.
Implications for Energy and Matter
The discussion touches on the potential for energy generation from fields and the nature of matter at a fundamental level. K.M. suggests that understanding these principles could lead to new energy solutions.
K.M. theorizes that energy is a state description of electromagnetism, and the conservation of energy is a derived principle from field interactions.
The possibility of generating energy from the vacuum or neutrinos is mentioned, although practical applications remain theoretical.
The transformation approach allows for the derivation of physical laws, such as the conservation of energy, from the field theory perspective.
Railgun and Neutrinopower Concepts
The discussion revolves around the Railgun as a practical example of Neutrinopower, highlighting its unexpected energy output and the implications of such technology. The conversation emphasizes the potential for free energy generation and the challenges associated with harnessing it effectively.
The Railgun, known for its high energy output, reportedly produced 399 GJ from an input of only 16.7 MJ, indicating an over-unity factor of roughly 24,000 (a quick arithmetic check follows this list).
Engineers involved in the SDI project faced significant challenges, including structural failures during tests.
The Railgun operates using high voltage and rapid changes in current, similar to natural phenomena like lightning.
The concept of Neutrinopower suggests that Neutrinos can be materialized and harnessed for energy, drawing parallels to natural energy conversion processes.
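Taking the reported figures at face value, the quoted factor is simply the ratio of output to input energy: 399 GJ = 399,000 MJ, and 399,000 MJ ÷ 16.7 MJ ≈ 23,900, which rounds to the stated 24,000. This only checks that the reported numbers are internally consistent; it says nothing about whether the underlying measurement is sound.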
Challenges in Harnessing Free Energy
The conversation highlights the difficulties inventors face when attempting to create stable free energy devices, particularly regarding control mechanisms and energy regulation.
Continuous operation of free energy devices can lead to catastrophic failures if not properly regulated.
Many inventors fail to consider the necessary control systems, leading to instability and potential destruction of their devices.
Historical examples, including Tesla’s experiences, illustrate the risks associated with free energy experimentation.
Neutrinos and Biological Implications
The dialogue explores the biological effects of Neutrinos on human cells and their potential link to aging and diseases like cancer.
Increased exposure to Neutrinos may damage mitochondria, leading to energy deficiencies in cells and potentially accelerating aging.
The discussion suggests that excessive Neutrino exposure could contribute to rapid cell division, possibly resulting in cancer.
The concept of spontaneous human combustion is linked to Neutrino accumulation, indicating a need for further research into its biological effects.
Tesla’s Innovations and Theoretical Applications
The conversation delves into Nikola Tesla’s contributions to energy transmission and his visionary ideas regarding wireless energy transfer.
Tesla’s work on the single-wire transmission system demonstrated a theoretical efficiency of 100%, i.e. transmission without energy losses.
His experiments with high-voltage systems and flat coils led to significant advancements in energy transmission technology.
Tesla’s vision for wireless energy distribution was ahead of its time, facing resistance from investors concerned about unregulated energy distribution.
Future of Energy Technologies
The discussion concludes with reflections on the future of energy technologies, particularly the potential of Neutrinopower and Tesla’s theories.
Neutrino-based energy systems are seen as a decentralized and highly efficient alternative to traditional energy sources.
Tesla’s single-wire and wireless energy transmission concepts remain relevant, with potential applications in modern energy systems.
The conversation emphasizes the need for further exploration and development of these innovative energy solutions to address current energy challenges.
Neutrinopower and Its Applications
Neutrinopower is a revolutionary concept that utilizes neutrinos for energy generation, challenging traditional electrical engineering principles. The discussion highlights the potential of new technologies and materials needed to harness this energy effectively.
Neutrinos can be attracted and their density increased through resonant interactions.
New components are required for isolator technology, replacing conventional electrical components.
Neutrinolysis (“Neutrinolyse” in the original), a process where neutrinos interact with water, can produce hydrogen and oxygen without consuming electrical energy.
Stanley Meyer developed a water-fuel cell that uses water as a fuel source, achieving a fuel consumption of 2.8 liters per 100 kilometers.
The Role of Water in Neutrinopower
Water plays a crucial role in enhancing the effectiveness of neutrino interactions, acting almost like a catalyst.
Water’s high dielectric constant allows for strong interactions with potential vortices.
The dipole nature of water molecules facilitates easy resonance with neutrinos.
Increased water content in batteries enhances their recharging capabilities, with lead-acid batteries being particularly popular.
Historical Context and Technological Challenges
The conversation touches on historical figures and the challenges faced by inventors in the field of free energy and neutrino technology.
Inventors like Walter Schauberger and Stanley Meyer faced significant obstacles, including suppression of their technologies.
The discussion reflects on the potential dangers and risks associated with pioneering new energy technologies.
Historical events, such as the observation of supernovae, are linked to changes in neutrino radiation and its effects on Earth.
Neutrinos and Cosmic Phenomena
Neutrinos are linked to cosmic events, such as supernovae, which can significantly impact the Earth and its environment.
Supernovae release vast amounts of neutrinos, which can affect solar activity and geological events on Earth.
Historical supernovae may have influenced human history and biological development due to changes in radiation levels.
The potential for increased neutrino radiation could lead to geological disturbances, including earthquakes and volcanic eruptions.
Theoretical Implications of Neutrinos
Theoretical discussions suggest that neutrinos could play a role in understanding fundamental forces in the universe, including gravity and electromagnetic interactions.
Neutrinos may provide insights into the structure and behavior of galaxies, challenging existing astrophysical models.
The concept of resonant interactions could explain phenomena that current physics struggles to address.
The discussion proposes that the universe operates on a cycle of energy exchange, with neutrinos being central to this process.
Future of Neutrinopower Technology
The future of energy generation may heavily rely on the utilization of neutrinos, with significant implications for technology and society.
There is optimism that advancements in neutrino technology could lead to cleaner and more efficient energy sources.
The timeline for widespread adoption remains uncertain, influenced by technological developments and societal acceptance.
The potential for a shift in energy paradigms could reshape industries and environmental practices globally.
Literature Cited in the Context
The text provides a comprehensive list of literature related to electromagnetic compatibility and scalar wave technology, primarily authored by K. Meyl and other notable figures. This literature serves as foundational references for understanding the principles discussed in the context.
Key works by K. Meyl include three parts on electromagnetic compatibility, with English translations titled “Scalar Waves.”
Other notable references include works by Nikola Tesla, Johannes von Buttlar, and various scientific publications on electromagnetism and energy.
The literature spans various topics, including free energy, electromagnetic fields, and theoretical physics.
Recommended Literature for Further Study
The text recommends specific books and resources for readers interested in the subject of neutrino power and scalar wave technology. These resources are essential for a deeper understanding of the concepts presented.
The three main books by K. Meyl are essential:
Part 1: Causes, phenomena, and scientific consequences (ISBN 3-9802542-8-3, 16 EUR).
Part 2: Free energy and neutrino interaction (ISBN 3-9802542-9-1, 16 EUR).
Part 3: Information technology and scalar waves (ISBN 3-9802542-7-5, 16 EUR).
Additional documentation and videos are available for purchase, enhancing the learning experience.
Experiments on Scalar Wave Transmission
The text outlines various experiments related to scalar wave transmission, emphasizing their unique properties and potential applications. These experiments challenge conventional physics and demonstrate extraordinary phenomena.
Experiments include wireless energy transmission and feedback from the receiver to the sender.
Claims of free energy generation with approximately 10 times over-unity efficiency are presented.
Scalar wave transmission is suggested to occur at about 1.5 times the speed of light, alongside observations of tunneling effects.
Available Experimentation Sets for Learning
The text describes two types of experimentation sets available for purchase, aimed at different audiences interested in exploring scalar wave technology. These sets facilitate hands-on learning and experimentation.
The Demonstration Set is priced at 800 EUR and is designed for non-experts, allowing five experiments without additional tools.
The Experimentation Set costs 1400 EUR and includes advanced equipment for physicists and engineers, featuring three different coil sets and a frequency counter.
This post consists of a collection of images of vortices occurring in various media, with brief explanations as to what may be happening from the point of view of vortex physics. The similarity of structure across different substrates, whether fluid, gas, plasma or biological tissue, suggests a common underlying mechanism in the form of rotating electromagnetic field vortices.
The vortex principle
From Konstantin Meyl’s books Potential Vortex vols. 1-4 and Scalar Waves: a first Tesla Physics handbook, we have the following diagram illustrating the basic idea of the Vortex Principle.
The illustration shows a vortex with a spiral flow outside the vortex radius (depicted as a circle) and solid body rotation within the radius. The arrows represent the velocity of the vortex movement whether it be fluid motion or electric field currents.
The velocity reaches a maximum at the vortex radius; outside the radius its magnitude drops away as the inverse square of the distance from the centre, while inside the vortex it attenuates linearly, reaching zero at the actual centre (see the short code sketch below).
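To make the profile concrete, here is a minimal sketch in Python of such a piecewise velocity magnitude: a linear rise inside the radius and a fall-off outside. The function name, the choice of (R/r)² for the outer decay and the sample values are illustrative assumptions, not formulas taken from Meyl's books.

```python
def vortex_speed(r, R=1.0, v_max=1.0):
    """Illustrative piecewise speed profile for the vortex described above.

    Inside the vortex radius R: solid-body rotation, speed rising linearly
    from zero at the centre to v_max at R.
    Outside R: the speed is assumed here to fall off as (R/r)**2, one
    reading of the "inverse square" statement; this is a sketch only.
    """
    if r <= 0:
        return 0.0
    if r <= R:
        return v_max * r / R          # linear rise to the maximum at R
    return v_max * (R / r) ** 2       # assumed inverse-square decay outside R

# sample the profile at a few radii
for r in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"r = {r:3.1f}  v = {vortex_speed(r):.3f}")
```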
This basic idea can be built upon to construct far more complex structures and in many ways can be seen as the basic building block of the entire Universe.
In the case of electric field vortices, the pattern is driven by the electromagnetic field equations given by Meyl, which are a tidied up version of Maxwell’s equations with the added concept of field movement.
The movement is at the speed of light and provides a motivational force at the heart of physical reality. Spiral movement in vortex structures needs no originating impulse as the movement is already there as a fundamental of physics.
A cosmological manifestation: the bar galaxy
A clear case of a basic vortex structure on a grand scale.
The arms of the galaxy spiral inwards according to electromagnetic field forces and the solid body rotation at the centre maintains the bar shape.
The structure seen is not the consequence of the movement of charged particles but rather the cause of such movement. The field self-organises according to the vortex principle and the charges then move in accordance with the Lorentz forces.
A clear example of a vortex radius
This image, taken from a video by Michael Clarage and the Thunderbolts Project, again shows spiral movement towards a clear demarcation at a specific radius. This can be interpreted as a vortex radius as above, or maybe a ring vortex (see below).
The basic structure here is of an electric field (shown in pink) circulating the torus and a concomitant magnetic field whose field lines (yellow) flow at right angles to the electric.
The two fields are always in movement and so always co-exist. Magnetic and electric fields therefore are inseparable and cannot be thought of as different fields but rather different aspects of the same field.
Plasmoids and smoke rings
Smoke rings and water rings take on the form of a ring vortex with actual matter moving around a torus shape.
In the case of plasmoids created in a laboratory, it is charged particles that are moving within a stable torus – but how?
The easiest solution to think about is that an electric field is organised according to the field equations of Meyl and it is this that drives the movement of charged particles.
Classical physics needs some sort of charge to create an electric field which leaves us tied to the idea of matter as the instigator in all field creation and movement. This means that in the case of electric vortices we must first find a way for the particles to move around in the required pattern in order for them to be responsible for creating the field that will maintain such a pattern.
This may sound reasonable(?!) but there is some circularity of causality here.
Positioning the field itself as the progenitor solves the problem. The field moves of its own impulse and organises itself according to the fundamental field equations of Meyl. Any charged particles within the field will now move according to the usual Lorentz forces.
Causality proceeds from field to particle.
Persistence of electromagnetic vortices
Moving rings in air or water will attenuate but there is no friction in pure field vortices.
Scientists at Imperial College, London claim to have found a magnetic field vortex in a piece of rock that has retained accurate information concerning the Earth’s magnetic field from a billion years ago.
The image below shows a very clear ring vortex with a luminous axis.
Mainstream physics wants us to believe that the white axial jet is produced by a massive black hole at the centre of the galaxy and shoots outwards, away from the galactic centre.
With the hindsight of vortex physics, the immediate impression, given no further information, is that of a field-organised ring vortex that accumulates energy from the surrounding cosmos to instantiate energy and matter as the vortex principle concentrates the field activity in the ring.
Matter is moved around the ring structure by field forces and is not flung out along the axis (see smoke rings) but pushed inwards to stabilise the ring.
An enclosing inward spiralling of field movement concentrates energy at the axis, allowing the creation of smaller energetic vortices in the form of photons and possibly electrons. Photons radiate outwards and a ‘tornado’ of energy becomes visible.
Instead of believing that all the energy comes out of an infinitesimally small point of space for no reason at all, consider that the energy is continually accumulated inwards from the vastness of space according to immutable physical laws that are verifiable in a laboratory.
Proteins
The electrostatic field lines of TRAP, trp RNA binding attenuation protein (PDB ID, 2EXS)
The field lines are described as ‘electrostatic’ but electrostatic field lines are usually radial, so how do they get to look like this?
More likely the electric fields are moving and this gives rise to magnetic fields and consequent torsion forces which form the characteristic vortex pattern.
In a cellular environment, the vortex will be sustained by an accumulation of energy from the surroundings and a magnetic dipole will help align the protein with the ambient field structure.
These seem natural if regarded as field structures, since the field laws are (almost) scale invariant. If a ring vortex can form at a large scale then it can form at a smaller scale.
It no doubt takes additional energy to form the inner rings but a general disposition for energy to move inwards ensures that there is always sufficient to create additional vortices.
In ‘turbulent’ flow in fluids, the flow is organised into nested vortices. Energy flows from the larger scale, outer vortices, to the smaller scale inner vortices.
Electromagnetic vortices follow a similar pattern of nested vortices and Meyl’s vortex principle describes the inward spiralling of a single vortex.
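For comparison, the standard fluid-dynamics statement of this cascade is the Kolmogorov spectrum for the inertial range, which describes how kinetic energy is distributed across scales (k is the wavenumber, ε the rate of energy transfer per unit mass and C a dimensionless constant):

$$E(k) = C\,\varepsilon^{2/3}\,k^{-5/3}$$

Whether an analogous spectrum applies to the electromagnetic vortices discussed here is the speculation of this post, not an established result.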
Very likely then, the pattern of electric field flow within the human body is as described with these water vortices. An overall toroidal flow breaks down into smaller and smaller vortices down to the scale of the organ, cell, nucleus and eventually electron.
Energy flows from the large scale to the small and the smallest vortices accumulate sufficient energy to transmute elements, break down water into hydrogen and oxygen, or materialise an electron from pure energy.
Look again at the fractal ring vortices above. There is no ‘smallest size’ in vortex physics, no Planck length and no granular structure as with Wolfram’s scheme, so it is theoretically possible for vortices to exist with an arbitrary degree of nesting.
Now if information can somehow be inserted into and retrieved from such an arrangement then there is no theoretical limit to the amount of information stored, thereby making such a scheme an ideal candidate for the inherited substance of genetics.
Living systems are often said to be far from thermodynamic equilibrium, but how they manage to stay there has remained a mystery. Fractal vortices surely offer a plausible explanation. Energy spirals inwards and is stored as smaller and smaller vortices. The fractal nature allows for an arbitrarily large amount of energy to be stored and the inward spiralling vortex principle acts against radial dissipation as an effective anti-entropy mechanism.
Helical cloud structures
One Twitter user claims this was photographed over Russia a few years ago and another labelled it as “God’s DNA”.
Look carefully at the edges of the structure and compare with the fractal images above. We can see the fuzziness as the whole structure starting to deteriorate or we can see it as a series of fractal ring vortices that were responsible for creating it in the first place.
Stefan Lanka has described DNA as being created ‘out of the nothing and without a template‘. This now sounds reasonable.
The laws of physics are the same at all scales of reality.
Attempts to detect the torsion field nature of scalar wave generated by dual Tesla coil system – Gao Peng
Gao Peng placed a Tesla coil beneath a small wooden frame suspended by a thread. The frame was made to rotate by a very noticeable angle which immediately suggests the spiral nature of an electric field.
Interestingly, a larger frame rotated in the opposite direction, now suggesting that the field itself may have counter-rotating properties.
Coaxial flow
Coaxial flow patterns with alternating rotation, as shown below, have now been observed by many cosmologists. They are sometimes described as field-aligned currents, force-free currents or Birkeland currents, depending upon the exact pattern of flow.
A video from Donald Scott shows examples of coaxial flow in the organisation of galaxies, the weather systems of the planets and the formation of tornadoes on Earth: video
Coaxial rotation in a tornado
The north pole of Saturn
Don lists the common elements in galaxies. These vary according to distance from the centre.
From outside to inside they are: H – He – O – N – C – S – Fe
This is hard to explain until we recognise:
An innate vortex energy to the cosmos
Vortices spiral inwards
Energy increases towards the centre (Energy Cascade)
Electrons and Hydrogen are materialised direct from vortex energy alone
The heavier elements are transmuted from the lighter
An electric current flows through water and produces a gravity-defying bridge between two glass jars (Elmar Fuchs). The flow of water is described as rotating both clockwise and anticlockwise at the same time.
How does water defy gravity like this?
Electric currents are not primarily driven by electrons but by electric field ‘currents’. The currents flow through the beaker, into the bridge and self-organise into coaxial flow as a least energy dynamic.
The coaxial flow of the field current entails strong electromagnetic forces between rotating elements and forms a highly stable structure. These forces are strong enough to both drag the water itself into the coaxial pattern and to bear the physical weight of the water in the bridge.
This complex pattern in water flowing through a wooden pipe, drawn by Viktor Schauberger, demonstrates both interlinked helical flow and ring vortices at the same time.
In some experiments Schauberger describes the water pulling away from the sides and in others a ‘negative resistance‘ is recorded, presumably meaning that the water is somehow accumulating extra energy from outside the pipe and using it to drive the flow.
Given this illustration, there is certainly some centripetal force pulling the water towards the centre line. This is expected from an electromagnetic field, but here we have actual water; it has considerable ‘mass’ and the molecules should be flung to the outside of the pipe by centrifugal forces – but they aren’t.
Electromagnetic field forces are clearly at play and the simplest way to think about this is to imagine an organised vortex field forming a template for the water flow which then somehow moves the water molecules along the prescribed pathway.
How to explain the phenomenon of negative resistance? Additional energy is needed and it is likely coming from outside the pipe in the form of electromagnetic vortices. These contribute energy by first orienting with respect to the existing field polarity and then integrating seamlessly into it. Such energy could be in the form of solar neutrinos or the less energetic field vortices from ionospheric discharge.
Capacitor discharge
Electrode corrosion degradation in metallized polypropylene capacitors – Yializis et al
A capacitor was set up in a laboratory and left to discharge for 40 hours. A circular pattern results, suggesting that the discharge is of a helical nature and that a vortex field exists between the plates.
Mainstream science predicts a uniform electromagnetic field between the plates but vortices seem to form with very little encouragement.
In the image below from a video by Bob Greenyer we see the results of electron streams impacting a metal plate(?) or something similar.
The patterns look like the results of the formation of fractal ring vortices in the substrate. A clear ring is seen, as is a subdivision into numerous smaller elements which could also be interpreted as ring vortices.
The images below are from electron micrographs of biological tissue. The tissue has been frozen, sliced, stained and finally subject to an electron beam.
The images are claimed to represent nuclear pores seen from different angles but there are good arguments from Harold Hillman to suggest that this is not the case and that what we are seeing are merely artefacts of the microscopy process.
The similarity to the electron beam images above is confirmation of Hillman’s claims, and we can now see that the patterns observed originate not from the tissue at all but from the electron beam itself, even before it has impacted the material on the microscope slide.
Electron micrograph images are therefore more closely related to the fundamental laws of field physics than they are to the structure of a living cell.
Exosomes
Surely no comment is needed by now!
Field interaction
The image depicts ‘isobars’ of two interacting vortices. They can be imagined as vortices in water, gravity or electric fields; whichever is easiest.
The question is, how do these two vortices attract or repel each other? The classical (Newtonian) idea of a ‘force’ emanating from one to influence the other is unsatisfactory (see: The nature of gravity) and was even disliked by Newton himself.
In the diagram, neither vortex centre influences the other directly but there is ‘field interaction’ everywhere in between and this is the mechanism by which the vortices will move.
Thinking of these vortices as water tornadoes for the present, we understand that the vortex field does not ’emanate’ from the vortex radius but that it is the radius that arises from the inward spiralling movement of the vortex field as a whole.
To see how one vortex might influence another, we need to think about how to instigate changes in the vortex structure as a whole in order to ‘move’ the radius from one place to another. To rephrase: we need to think about how the vortex structures can be rearranged to give rise to a radius in an apparently different position.
As two vortices approach each other then there will be field interaction at the periphery of both and emergent forces will arise. It is to be stressed that these forces do not act directly upon the centre of either vortex but are ‘field forces’ that exist at each point in the field and have strictly local effects only; they do not travel, radiate, emanate or otherwise influence other points at any distance, no matter how small.
The field vectors combine together at each and every point, and a new composite field is formed which continues to evolve according to the field equations. A new solution to the equations is formed at every instant and a new configuration appears to the eye whereby the shrinking distance between the vortex centres is interpreted as ‘movement’ caused by attractive ‘forces’ between the vortices/masses/charges.
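As a toy illustration of “field vectors combining at each point”, the sketch below superposes two copies of the piecewise vortex profile sketched earlier and evaluates the resulting vector at a few sample points. The helper names, the profile and the chosen centres are assumptions for illustration only; this is not a solution of Meyl's field equations.

```python
import numpy as np

def vortex_velocity(x, y, cx, cy, R=1.0, v_max=1.0, sign=+1):
    """Tangential velocity vector of a single vortex centred at (cx, cy).

    Uses the same illustrative piecewise profile as before (linear inside R,
    inverse-square outside); 'sign' selects the sense of rotation.
    """
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    if r == 0:
        return np.zeros(2)
    speed = v_max * (r / R) if r <= R else v_max * (R / r) ** 2
    tangent = np.array([-dy, dx]) / r   # unit vector perpendicular to the radius
    return sign * speed * tangent

def composite_field(x, y, centres):
    """'Field interaction': at every point the individual vortex fields simply add."""
    return sum(vortex_velocity(x, y, cx, cy) for cx, cy in centres)

centres = [(-1.5, 0.0), (1.5, 0.0)]    # two vortices side by side
for point in [(0.0, 0.0), (0.0, 1.0), (-1.5, 1.0)]:
    v = composite_field(*point, centres)
    print(point, "->", np.round(v, 3))
```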
A future post will try to explain that in the case of electromagnetic vortices at least, there is no such thing as ‘movement’ as normally conceived, but that the field conditions themselves determine what constitutes ‘distance’. What we are seeing therefore is a literal shrinking of ‘distance’ between the two vortices and this is interpreted by us as ‘movement’ of the vortices towards each other.
Cellular organisation
A depiction of electric and magnetic vortex currents at the surface of the sun shows a wide variety of vortices that lead to a global cellular structure.
The laws of electrodynamics are scale invariant and so it is not outrageous to speculate that a similar arrangement may have existed at the surface of an early Earth and that these are the conditions which hosted the origins of life.
Again, it is not too unreasonable to think that the human body is composed of a similar electromagnetic bio-field that self-organises along cellular patterns such as seen here.
Electric fields and a concomitant cellular structure preceded the construction of the first cell and acted as a template and energy supply for it. See: The cosmic origins of Life.
An electromagnetic template forms the informational basis for a human embryo and an inward spiralling vortex creates the energy supply. See: The nature of the bio-field
A common feature in electric cosmology is the z-pinch illustrated below. A prominent constriction in a cosmic Birkeland current is thought to accumulate energy and be instrumental in the formation of stars.
In the vortex physics of Meyl, electrons (i.e. ‘matter’) are just spherical field vortices given extra stability by their size relative to the speed of field propagation (the speed of light), which means that matter can form from an apparent void at the centre of almost any vortex formation including those found in the human body.
The z-pinch is so similar in appearance to the nodes of Ranvier in the myelin sheath that this cannot be ignored. The basic idea for the construction of a biological system is that an electromagnetic vortex field precedes any actual assembly of physical matter and that the morphology of the organism is therefore a reflection of such a field.
Construction of nerves appears to be via travelling ring vortices that accumulate energy from the surroundings and use it to organise matter into conductive tubular structures.
When construction is complete, standing waves form and z-pinches help to create the Nodes of Ranvier.
Conduction of nervous impulses is by scalar wave conduction according to Meyl, with magnetic ring vortices moving within the myelin sheath. See: Scalar waves and nerves.
The resulting system forms a resonant communication device which will filter out any signal of an inappropriate frequency, as determined by the distance between the nodes.
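As a rough illustration of how a node spacing could pick out a frequency, assume the internode segment behaves like a simple resonator of length d with propagation speed v and nodes pinned at each end; the allowed standing-wave frequencies would then be

$$f_n = \frac{n\,v}{2d}, \qquad n = 1, 2, 3, \dots$$

so only signals near these frequencies would be sustained. This is the generic standing-wave relation offered as a sketch, not a formula taken from Meyl.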
Magnetic vortices exhibit complex behaviour that can be controlled by the application of external fields.
For example: “Vortex core switching by rotating magnetic field with inducing spin polarized currents and by frequency dependent perpendicular magnetic fields have also been performed. The circularity has been switched using magnetic force microscope tips, by nanosecond field pulses and by electric fields“
This is precisely what we need for the function and control of an electromagnetic bio-field.
Birkeland currents in blood flow
The video below shows a simulation of a Birkeland current in plasma. Twin filaments are formed that spiral around each other.
A general principle of electromagnetic vortices seems to be that they possess long range attraction with each other but short range repulsion, which leads to highly organised structure everywhere as elements move close to each other but maintain a consistent separation.
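The text gives no functional form for this combination of long-range attraction and short-range repulsion; the textbook example of such a pair potential is the Lennard-Jones form, quoted here purely to make the qualitative statement concrete (ε sets the depth of the attractive well and σ the separation at which repulsion takes over):

$$U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$

Nothing here claims that electromagnetic vortices actually follow this law; it is an analogy for the “attract at a distance, repel up close” behaviour described above.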
Compare the video to Bremer’s description of blood flow in the heart of a chick embryo.
“ J J Bremer described the streams of spiralling blood with different forward velocities in the single tube stage heart. Nevertheless, the blood is noted to have a definite direction of flow within the conduits and moves without an apparent propelling mechanism. These streams spiral around their own longitudinal axes and around each other.” – Marinelli (The Heart is not a pump)
The human bio-field is often depicted as an encompassing electromagnetic torus as in the image below.
Toroidal or circular electrical circuitry, spirals in the blood flow, rotating nuclei, steep electric gradients across cell boundaries and careful energy management all suggest the existence of an internal field that is organised as a series of nested vortices from the level of the organism down to the cell and beyond.
If we accept that the general vortex properties described in this post are still valid for the human body then we can say that bio-electric energy:
Flows in a torus shape
Flows towards the interior of the body
Intensifies towards the centre according to the cascade principle
Self-organises into cellular structures
Forms a nested torus structure
Accumulates additional energy from external sources, which is sequestered in fractal ring vortices, thereby keeping the body far from thermodynamic equilibrium
Is capable of regulation by relatively subtle electric signalling
Can be made to flow outwards to reduce temperature(?)
Is suitable as a power supply to the central organs
Cognition here means, loosely, ‘biological computation’ and refers to how organisms represent, manipulate and integrate information. A very specific scheme of top-down causation is described which leads to a simplified understanding of what is otherwise a seemingly intractable problem.
We cannot directly observe or measure the structure of cognition but can make sensible speculations drawing from:
Introspective observations
The necessities of complex systems
Analogy with cybernetics
The idea of a schema from psychology
The phenomenon of metamorphosis
A proposed mechanism of inheritance
The commonality of behaviour of all animals
What processes can be described as ‘cognitive’?
The most obviously ‘cognitive’ process, and the one that springs quickest to mind, is the way in which raw ‘data’ from the external world is synthesised into meaningful ‘biological information’ by our sensory systems. However, many other systems within the body show similar patterns.
Vast amounts of data are collected, encoded, transported and interpreted to provide biological meaning. Computation is performed and biological instructions flow the other way to effect some sort of organisation and action on collections of billions of otherwise independent cells.
Sensory processing
Thinking (problem solving)
Proprioception
Embryonic development
Evolutionary progression
Regulation of metabolites
Regulation of the cardiovascular system
Healing and regeneration
Morphogenesis
These processes are usually described simply in terms of mechanics or chemistry, as if all that really matters are local reactions and interactions, with global organisation being a natural consequence of such activity instead of a progenitor.
The concept of cognition described here, however, puts organisation at the top of the causal tree with the individual activities of cells as ultimately subservient to the teleological needs of the whole.
The phenomenon of ’emergence’ is assigned a role as an intermediary, mediating between a supervening bio-field at the top of the tree and the raw action of molecules acting in accordance with the immutable laws of physics at the bottom.
The general structure of biology
In the diagram below we have the actions of atoms and molecules at the bottom and an ‘intelligent’ bio-field overseeing operations at the top. Emergent properties sit in between the two layers, conveying information and instructions both ways.
Note that the supervening (cognitive) bio-field is decoupled from the laws of physics as normally conceived and interacts only with the emergent properties of molecular ensembles or cellular collectives.
Beating heart cells
The short video below shows a collection of heart cells beating away by themselves. Some degree of synchronisation has been achieved and we may say that we are seeing emergent behaviour. That is to say, the phenomenon of synchronisation is something that cannot be seen in any single cell but nevertheless arises from the properties of the cells alone without the need for exogenous input.
However, this is not a heart and the contraction we are seeing is not a heartbeat. For these we need some external input to the system, some high level instructions.
Such instructions know nothing of the actions of individual cells; they do not need to. Instead the instructions are directed at the collective as a whole and operate through its properties alone. These instructions are therefore very specific to the biological system in consideration.
The information is electromagnetic in nature in order to be able to interact with the emergent bio-field of the cellular collective and may be thought of as containing the very simple instruction: “Beat now!” or maybe “Beat a little faster”.
It is this simple. The higher level instructions do not need to know anything about the activities of individual cells or even how they coordinate. This is all handled by the emergent layer; at the level of the molecule, all action is according to the laws of physics.
A proposed mechanism
We need some unified method to implement the formation, persistence and transport of memories, thoughts and instructions.
A proposed mechanism, at least to start with, is either a single electromagnetic vortex (right) or some co-operative of such vortices. These are stable structures, malleable in shape and robust to insult. The overall shape survives by the accumulation of energy and refined modulations of the structure somehow constitute information.
For now we will assume that these structures can be packed with almost arbitrarily large quantities of information, can be transported from one place to another and can persist as ‘memory’. Information can be extracted at a later stage which inevitably leads to energy loss, but a continual influx of energy via an inward spiralling vortex field ensures a permanent renewal.
These entities are constructed anew in the brain from existing memories, are transported along the nerves via the myelin sheath and instigate action in each and every cell in the body by having a direct effect on the local emergent field.
A single wave complex can hold all the information necessary to create an entire new organism and it is this that constitutes the ‘inherited substance’ of evolutionary theory. See: Evolution and Inheritance
Persistence of memory
In one experiment, a caterpillar was trained to eat leaves on, say, a red background, and the resulting butterfly went to a red background to look for food.
The brain has been completely liquefied and the new body has six legs and wings. Nevertheless, the emerging butterfly exhibits the same behaviour as the original caterpillar.
This suggests that memories are not stored in the physical matter of the cells but in some immaterial medium. The best candidate for this is a bio-electric field in the form of an electromagnetic vortex complex.
The caption claims that “memories are generalised and remapped onto a new architecture“. This is more complicated than it need be; what process is it that performs the ‘remapping’?
Memories need not be remapped at all but transferred intact to the new organism. A memory that associates ‘food’ (and hence ‘survival’) with a red background is encoded as an electromagnetic pattern and passed from the caterpillar to the butterfly. When the butterfly sees a red background it recognises it as being associated with food (from the memory) and issues the instruction “Walk towards”.
The walking subroutine is engaged and movement begins; top down causation has taken place.
The specific patterns of cellular activity in caterpillar and butterfly are different for sure but this is irrelevant. Muscular contraction is largely an emergent property, specialised to the individual architecture and does not in any way determine the fate of the organism.
The behaviour of organisms is not the emergent behaviour of 30 trillion cells but the teleological outcome of scalar wave (vortex) instructions operating on the emergent properties of cellular collectives.
Inherited behaviour implies a single instruction set
The phenomenon of evolutionary inheritance whereby traits and goal-oriented behavioural patterns are passed from parents to offspring can be explained in a similar manner. The necessary information is encoded into the gametes and combined with similar information from the opposite sex before being utilised by the developing embryo to form a new organism.
Think about how a bird might build a nest. It isn’t learned behaviour as it never sees its own parents build their nest. Therefore all the required knowledge must be passed down via some sort of information field.
How does this happen?
Does the fledgling contain a complete instruction-set telling each of the cells in its body how to flap wings, peck beak, carry twig etc.? The precise cellular arrangement in each bird differs slightly and so this simply cannot happen. Again, behaviour is not an emergent effect of cellular contraction but something else is involved.
The bird inherits a complete cognitive map of the nest building process, with added emphasis on the word ‘cognitive’.
Definition: A ‘cognition’ can be thought of as a high level impression, instruction, memory, recognition or maybe quale (plural: qualia) that is encoded physically in a single vortex complex. This physical structure will have a reproducible effect when applied to a biological system by means of top-down interpretation and causation.
Now if the ability to create a nest is to be reliably transmitted, it follows that the physical representation of the information must be the same for each individual; for if representations differ, there is little chance of such information from both parents being integrated into functional offspring.
This is not an outrageous statement by any means and is consistent with the notion of precise digital information from DNA as the ‘universal’ transport format for inherited information. A big difference here is that the means of inheritance, memory and decision-making all use the same medium, namely: electromagnetic vortex waves.
Mainstream biology has to somehow contend with the idea that memory and decision making are represented by ‘neuronal states’ whilst inherited behaviour by the ordering of base pairs on DNA. The problem remains then of somehow converting between these seemingly incompatible formats without any apparent mechanism by which to achieve such a feat; how do you inherit a ‘neuronal state’?
Do we all see the same colour red?
Setting aside colour blindness and tetrachromacy, the mechanics of vision have been shown to be near identical in all humans, meaning that the retina processes things the same way and the signal travelling up the optic nerve is also much the same in everybody.
The nerve signal at this point is already a ring vortex (scalar wave) and needs no extra processing to become an element of memory, perception or inheritance. If we identify ‘biological red’ as the structure of a vortex representing red, then all physical representations of red are identical.
This is necessary for the persistence of memory through metamorphosis as described above. The red ‘looks’ the same and has the same meaning in different individuals through other inherited patterns.
A few differences arise throughout the life of an organism because of acquired associations with food, fear etc. but otherwise, if we regard the structure of a ring vortex as synonymous with ‘experience’ then experiences of red are identical across a single species at least.
Qualia
We can try defining qualia as simply ‘the structure of a scalar wave’. We have an actual physical representation of such an entity and as such, there arises the possibility of obtaining objective measurements of it at some time in the future.
If we see a red apple, our cognitive system does not make a list of all constituent wavelengths reflected from the apple and indeed such a thing is not particularly useful. What we need is something that is easily recognisable and different from a green apple, so the two colours necessarily have different representations as vortex structures; the ‘qualia’ are different.
All colours have something in common in that they are colour-like, they are recognised as colours as opposed to smells within the cognitive system. No doubt this will be reflected in the electromagnetic structure somehow. Failure to distinguish this will result in the condition of synaesthesia.
Formulations of conscious experience as some sort of neurological ‘state’ are problematic in this respect. No experience, emotion or quale consists of a list of synaptic voltages and it is hard to imagine that such a thing could exist in an unambiguous fashion.
It must surely be the case that common experiences have a representation that is common across individuals and that such a representation should be independent of any physical state or arrangement of matter.
Useful experiences are each encoded as a single symbol that is unique to biological systems on Earth and it is that particular system of symbols and their consequent function that give our cognitive systems its particular character; it gives us our identity as living beings.
Philosophers describe qualia as ‘subjective’ experiences, but if all representations of ‘red’ are now the same then it is surely better to describe them as ‘objective but we just can’t measure them yet’?
What about ‘experience’?
We can walk past an apple tree and register the colour of the apples without being aware of it, without really ‘experiencing’ the colour. However, if we turn our attention to the colour and focus on it we will suddenly ‘experience’ the redness.
We are clearly not experiencing the apple itself or even the wavelengths emitted from the apple. Instead it seems that what we are focusing on is the particular symbol chosen to represent red and our interpretation of it. The cognitive system is introspective in this respect and it is able to focus attention on certain parts of its own mechanisms and symbols, thereby allowing an extra opportunity to self-program; we can now develop or evolve independently of a simple reward/punishment scheme.
‘Experience’ is therefore inextricably linked to attention; it is some outcome of a shift in cognitive functioning which is under our conscious control.
The Necker cube
The Necker cube (below) looks three dimensional even though the image on our retina is only two dimensional. It follows that the perception of three dimensions is created by our own cognitive systems and does not come from outside.
Note that we can, by a simple cognitive ‘shift’, arrange that the cube is tilted ‘up’ or ‘down’ as desired. Nothing has changed on the page but we have managed to alter our own perception of the pattern, we have consciously created a new ‘quale’.
A top-down instruction has altered what some might consider to be an immutable facet of our own consciousness.
Emotions
As a first attempt we can try describing an emotion as a general summary of the current state of the organism with a compact representation as a consistently defined vortex structure. This structure is recognised and interpreted by the cognitive system as a whole and gives us useful hints as to what we should be doing as regards our current situation and even prepares our body for consequent action.
An ’emotion’ from this perspective is an internal quale, a result of the cognitive system synthesising impressions based upon information from .. the cognitive system itself!
The representation of an emotion must be decoupled from the physical state of the brain molecules as argued above and instead is a meaningful reflection of some overall state of being of the organism.
It is said that there is no way of distinguishing between the emotions of fear and excitement on a physiological basis. Both states trigger the same physiological responses, such as increased heart rate, rapid breathing, and adrenaline release, with the main difference being how the cognitive system interprets such physical reactions. Fear is associated with danger and excitement with pleasure.
This is proof that there is such a thing as ‘cognition’ (interpretation), that there is something apart from merely a physical state of matter.
Ultimately it will no doubt be found that the physical representations of fear and excitement will have some measurable difference at the level of the bio-field.
Emotions as causative agents
If we see a mad dog rushing towards us, foaming at the mouth, our senses will interpret this as ‘danger’; we will experience fear and this will lead to physiological changes that prepare us for flight.
So here, fear is not just an impression on a cognitive cinema screen for us to peruse at our leisure, but an actual causal agent in bio-regulation and concomitant behaviour; a survival mechanism.
We can imagine a future technology where the physical signature for fear is stored on some electronic device, which when activated, emits a scalar wave stream with encoded fear instructions. Now if all animals (mammals at least) use the same encoding, we would expect to be able to artificially induce fear in any individual at the flick of a switch. We should be able to transmit an emotion directly and there should be a unity of effect across all higher order species.
If, as speculated, emotions are not just passive representations of an overall state but also causal agents, then it is worth asking how such agents are organised and re-used within the cognitive system.
An emotion as an adapter interface
The physiological reaction to seeing a mad dog is complex and important. We must get it right, but what happens if we now see a charging lion? We have to get this right first time or it is all over, and so it makes sense to re-purpose an already existing procedure.
We want to initiate the exact same physiological processes as with the dog or any other scary event. We need a ‘stored procedure’ and we need some sort of label or shorthand by which to initiate such a procedure.
There is no point having a separate response to every single jungle animal or threatening situation so we need to economise by invoking the appropriate response from a pre-existing library.
Many (possibly infinitely many) inputs to the system will be interpreted as ‘threatening’ and will invoke a single abstract ‘fear instruction’ whose structure is independent of any physical state and agnostic of the original trigger.
An increased salience of the fear instruction leads to our systems being flooded with such structures where they are interpreted at the local level to produce individual responses such as increased adrenaline in one location and a raised heartbeat in another.
In terms of systems design, the emotion of fear forms an adapter interface. Such constructs are common in modular design and are necessary for the stability and adaptability of the system. The perception of fear needs to remain constant throughout the life of a person even as the physiological response must change with age. Both the perception of ‘red’ and the need to eat are the same for both caterpillar and butterfly even though the consequent behaviour might differ.
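To make the design pattern explicit, here is a minimal sketch in Python. All of the class and method names (PhysiologyAdapter, JuvenileBody, AdultBody, fear_instruction) are hypothetical illustrations, not anything taken from biology; the point is only that a single, unchanging ‘fear’ signal can be routed through a fixed interface to whatever physiological implementation the current body happens to have.

from abc import ABC, abstractmethod

class PhysiologyAdapter(ABC):
    # The fixed interface: the 'fear' instruction itself never changes.
    @abstractmethod
    def respond_to_fear(self, intensity: float) -> None:
        ...

class JuvenileBody(PhysiologyAdapter):
    def respond_to_fear(self, intensity: float) -> None:
        print(f"freeze and call for a parent (intensity {intensity})")

class AdultBody(PhysiologyAdapter):
    def respond_to_fear(self, intensity: float) -> None:
        print(f"raise heart rate, release adrenaline, prepare to flee (intensity {intensity})")

def fear_instruction(body: PhysiologyAdapter, intensity: float) -> None:
    # Many different triggers (dog, lion, memory) all funnel into this single call.
    body.respond_to_fear(intensity)

fear_instruction(JuvenileBody(), 0.8)  # same instruction ...
fear_instruction(AdultBody(), 0.8)     # ... different physiological response

The perception-side symbol stays constant for the life of the program; only the adapter behind it is swapped as the ‘phenotype’ changes.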
Proof of some of these claims lies in the fact that we can summon fear from memory to some degree. In this case, the emotion is clearly not simply the consequence of neuron movement resulting from a visual stimulus, but instead results from a stored procedure within the cognitive system. Such a procedure has been invoked somehow and used as a causal trigger for the familiar physiological responses. The procedure is therefore decoupled from the original sensory input and is stored as a memory in its own right.
Qualia as the atoms of cognition
Emotions, qualia and intentions constitute de facto high level instructions which act as causal agents in a top-down system of control. We can extend the concept to any idea of consequence in any biological system. This leads to a consistency of structure in the cognitive systems of all higher order creatures on the planet and enables a high degree of communication between individuals even of different species.
Contrast this with the idea of an emotion or thought as consisting of merely an emergent state of the electric potential of neurons. We have different numbers of neurons each and they are all in different spatial arrangements. How do we even identify a ‘thought’ in all this mess? What specific feature differentiates one idea from another?
Ideas are represented by the structure of a scalar wave and that structure is consistent across most of life on Earth. The atoms of cognition are precisely these constructs and the brain is a scalar wave computer. Cognition itself consists of the interaction of such ‘atoms’ within the brain and their consequential effects when broadcast along the nerves to the rest of the body.
Again, cognitive computation is decoupled from the state of physical matter and operates upon it in a top-down fashion. The bio-field is dominant over the activities of molecules, not the other way around.
The Buddha quale as a physiological stabiliser
If an emotion is not just a reflection of an overall physiological state but also a causative agent, then there are real consequences for health in managing our own emotional state.
A feeling of peace and well being is no doubt a result of a balanced healthy mind and body but the arguments above suggest that it may also play a role in actually assisting in maintaining such a balance.
A Buddha-like quale can be summoned consciously and will start to exert an influence on the entire cognitive system, starting with the higher level functions before moving lower in the causative tree and eventually trickling down to the level of gene expression.
The conscious perception of a feeling of peace acts as feedback, letting us know how we are doing and enabling further refinements in the manner of an engineering control system, but the physical entity that is at the heart of the perception nevertheless exerts a direct and meaningful influence on physiological processes, actively promoting stability and order on a system wide scale.
Qualia computation is now part of physiology itself.
Emotions are, in this sense, a valuable resource and if we believe in the mechanism of telepathy mentioned below, or any other means of sharing emotional states, then we now have a scientific rationale for group healing practices. We now have a putative physical process by which to encode and transmit information and therefore the future possibility of characterising such a process by actual scientific measurement.
The commonality of behaviour of all animals
All animal behaviour is teleological or goal oriented, in that some end point is envisaged or intended and an adaptive behavioural pattern is triggered in furtherance of the achievement of such a goal.
This is in contrast to the behaviour of a particle in a magnetic field for example, which is simply the outcome of local forces. The behaviour is not directed towards a pre-set endpoint but instead the endpoint is an emergent and inevitable outcome of the laws of physics. Local variations in environment will entail a different outcome as the laws of physics are not adaptive.
The basic goals for all animals are the same: survive, reproduce, build nest, join tribe, eat, mate etc.
Now how can this be so if behaviour is merely the outcome of atomic interactions? Is it really just a vast coincidence that the molecules of a lion and a grasshopper should always result in similar outcomes?
Better to assume a common goal to all these patterns: top down causation results in identical goals implemented in different architectures. Both caterpillar and moth gravitate towards a red background by different means but with the same aim.
Assertion: Each ‘aim’ is represented by a symbolic vortex pattern that is specific to that aim and identical in structure and function in all animals.
The argument concerning the caterpillar above seems reasonable but we can extend the argument a little by considering what happens during parental inheritance and also throughout evolutionary history.
Early animals operate according to high level instructions (instincts) and these instructions must pass from parents to offspring largely unmodified to ensure survival. These are the ‘primal forces’ which are necessary for survival of the species and they are, moreover, independent of physical implementation, thereby allowing for evolution of purpose as a separate process from evolution of phenotype.
There is no need for the physical representation of an ‘innate’ instinct to change in any way as an animal evolves; all that needs to happen is that the response adapts to an evolving environment.
Telepathy
If we now regard individual thoughts as having a unique representation as field vortices and if these vortices are now energetically and structurally stable physical entities, then we now have at least a theoretical framework for telepathy; all we have to do is somehow transmit the information from one person to another and the thought will enter (has already entered!) their head.
This is just not possible if thought is simply regarded as an emergent state of a billion neurons. We have to ask how the communication of such a state happens and what use is it to the recipient if they have their neurons in a different order. Where is the information supposed to go to and how is it to have its effect?
Konstantin Meyl has speculated that vortex information can be transmitted from one person to another via the resonant structures of scalar waves, which are similar to Tesla waves. Two organisms form a filament-like connection and information passes along such a construct with almost no interference or loss.
If such a thing were to take place then it would seem essential that the encoding scheme on both sides be identical. We now have a mechanism that suggests that this is possible. Moreover, we now have the suggestion of a common vocabulary possessed by all animals consisting of identical teleological aims, primal emotions and shared aspirations.
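The requirement can be stated very simply in computational terms. Below is a toy Python sketch, with entirely made-up symbol values, that says nothing about scalar waves themselves; it only illustrates that a transmitted symbol is meaningful to the receiver if, and only if, both parties hold the same codebook.

# Hypothetical shared codebook: the same symbols assumed on both sides.
SHARED_CODEBOOK = {"fear": 0b0001, "hunger": 0b0010, "mate": 0b0100}

def transmit(emotion: str) -> int:
    # The sender encodes an emotion as its agreed symbol.
    return SHARED_CODEBOOK[emotion]

def receive(signal: int, codebook: dict) -> str:
    # The receiver decodes using whatever codebook it possesses.
    inverse = {v: k for k, v in codebook.items()}
    return inverse.get(signal, "unintelligible")

signal = transmit("fear")
print(receive(signal, SHARED_CODEBOOK))                      # 'fear' -- identical encoding
print(receive(signal, {"fear": 0b1000, "hunger": 0b0001}))   # 'unintelligible' -- different encoding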
Dogs can sense fear through purely olfactory information. The paper claims a chemical signal but even so it seems that vortex information is involved. The sense of smell is likely the result of the nasal hairs acting as vortex wave antennae and transmitting the information unmodified to the brain. (Scalar waves and nerves).
Very possibly, the dogs are merely recognising the smell of sweat and demonstrating a learned response, but there now exists the possibility that they may be directly affected by the sensed emotion, that they are feeling the same fear that was transmitted by means of the top-down influence of somebody else’s instructions upon their own cognitive system.
This is surely a useful feature in herd animals.
Dogs can also sense fear by means of visual cues from facial expressions to body language but this only reinforces the idea that similar cognitive structures are present in both humans and canines. Both species exhibit similar physiological changes and similar behaviour in response to threats that are cognitively similar and, moreover, members of each species can detect the presence of ‘fear’ in the other via (visual) cognitive input.
Similar comments apply to ‘linguistic’ commonalities, dogs clearly have a grasp of human vocabulary, but how did this come about? Did the dogs really wait until humans started shouting at them to evolve the ability to recognise the words or was some ability already present in their cognitive systems, some dog-linguistic structure that has merely been re-purposed slightly?
Why do humans immediately understand the urgency or threat in a dog’s bark? Is this really just learned behaviour or is a commonality in cognitive structure involved?
“Consistent components of complex traits, such as those linked to human stature/height, fertility, and food metabolism or to hereditary defects, have been shown to respond to environmental or nutritional condition and to be epigenetically inherited“
So a trait such as height is inherited, but how? The degree of control required to create a consistently tall person is considerable: we need longer muscles, femur, spine and nerves at the very least, along with a larger heart, a different sense of balance and so on.
How is all this information coded and transmitted? A map is produced outlining how long a leg is to be and how large a heart?
If indeed a trait is to be inherited then it is going to be as a single independent vortex structure which sits at the top of the causal tree and exerts a top-down influence down through the developmental process.
A single scalar wave complex encodes the desired height and an instruction is sent to the rest of the developmental system, but how are these instructions affected by the ‘environmental or nutritional condition‘? How does a nutritional deficiency result in an appropriate reduction in height in the next generation?
The answer must be that a teleological aim is set by the developmental-cognitive system itself, encoded as a vortex structure and then passed on to the next generation to implement; the parents ‘decide’ how tall their children will be. See: Evolution and cognition
Memory transference via organ transplants
Personality Changes Associated with Organ Transplants – Carter, Khoshnaw, Simmons, Hines, Wolfe, Liester
Many patients report mood disturbances and personality changes after organ transplants. Doubtless some of these can be put down to a mixture of anxiety and improved health at the same time, but some report very specific preference changes or the existence of new memories, both of which seem to align with those of the donor.
“Following surgery, Sylvia developed a new taste for green peppers and chicken nuggets, foods she previously disliked. As soon as she was released from the hospital, she promptly headed to a Kentucky Fried Chicken to order chicken nuggets. She later met her donor’s family and inquired about his affinity for green peppers. Their response was, ‘Are you kidding? He loved them… But what he really loved was chicken nuggets.’ Sylvia later discovered that at the time of her donor’s death in a motorcycle accident, a container of chicken nuggets was found under his jacket.”
Once again, for this to work there must be a common encoding system for the donor and recipient, i.e. they must both be using the same symbol for ‘liking chicken nuggets’.
Reincarnation and other exotica
For something like reincarnation to be viable we need, for starters, a compact and preferably non-material means of storing all the relevant information required. From the above, we already have something very similar (although not identical) used for inheritance of physical characteristics, goal-oriented behavioural patterns and memory storage.
The mechanism of data storage is now via a system of vortex structures and the coding system is identical for all mammals, which actually gives a theoretical possibility of physically transferring portions of a completed cognitive system from one host to another.
There are going to be many problems to be overcome obviously and one of these will be the question of whether a meaningful vortex structure can survive outside of the human host for any amount of time.
In one article (Muxworthy), a claim is made that magnetic vortices can survive billions of years and still retain a reliable record of the Earth’s history.
Even Rudolph Steiner’s claims of disembodied creatures wandering about looking for a host are now given some sort of theoretical basis.
Ian Stevenson’s paper makes a list of “Birthmarks and Birth Defects Corresponding to Wounds on Deceased Persons”. In each case, an abnormality seems to correspond with a wound on a deceased person whose life the child claims to remember.
Here we see “Almost absent finger (brachydactyly) of one hand in a boy of India who said he remembered the life of a boy of another village who had put his hand into the blades of a fodder chopping machine and had his fingers amputated.“
“Among 895 cases of children who claimed to remember a previous life (or were thought by adults to have had a previous life), birthmarks and/or birth defects attributed to the previous life were reported in 309 (35%) of the subjects. The birthmark or birth defect of the child was said to correspond to a wound (usually fatal) or other mark on the deceased person whose life the child said it remembered.”
“(in many cases) the birth defects in these cases are of unusual types and rarely correspond to any of the recognizable patterns of human malformation“
This is all very odd and it does seem unlikely that it has any significance but:
The case studies exist and all data needs an explanation, particularly surprising data
We now have a putative mechanism for transport of the required information
Primal teleological aims
It seems obvious that there exists a hierarchy of behaviour within the activities of living organisms and that aims such as survival of individual, bloodline or tribe are near or at the top of that hierarchy. A little lower down we have eat, sleep, reproduce, build nest, join tribe etc. all in service of the ‘higher’ aims of survival.
If the structure of cognition now consists of top-down causation mediated by vortex instructions then we can ask what is at the top of the tree and what do these instructions represent.
The answer now must be that the physical field vortices represent the teleological aims that correspond to the hierarchical behaviour patterns. An instruction of ‘reproduce’, for example, is given prominence when spring arrives and triggers instructions lower down in the hierarchy to ‘find mate’, ‘build nest’ which in turn give rise to ‘find twig’, ‘fly to tree’, ‘engage wing muscles’ and so forth down to the level of effecting the contraction of single muscle cells.
Note that the nest building begins with the general intention or ‘urge’ to reproduce with specific behavioural patterns coming later; what it does not begin with is the contraction of specific muscle cells to effect movement.
Behaviour is most certainly not an emergent effect of cellular collectives and that includes the firing of neurons.
Cognition vs decision making
Imagine you are an antelope and you see a lion approaching – what are you going to do?
Option 1: Carefully input as much information as possible, assess the dangers and make a considered decision as to what is the best course of action… Too late! You are already dead before even trying to run away.
Option 2: This is not really an option at all but an inevitable consequence of the structure of cognition. The information is synthesised to a pattern that is instantaneously recognised as a threat to the primal teleological aim of survival and an inevitable cognitive cascade is initiated, a new psychological schema has been invoked which drastically narrows down the available options.
The emotion of fear arises, the heart rate increases and adrenaline flows; breathing quickens and the muscles are prepared for action. This all happens automatically and instantaneously; individual cells are readied for action as a direct consequence of seeing a lion.
The only real decision left now is in which direction to run and that will likely be decided by the herd as a whole. All the preparation, however, is initiated by an essentially causal and largely deterministic chain of events.
Top-down planning: bottom-up execution
A plan is constructed in a top-down fashion starting with the eventual aim of reproducing, say. This necessitates building a nest and the building of a nest necessitates finding a mate etc.
Execution of the plan is bottom-up, however, with first the finding of the mate followed by the building of the nest and eventually reproduction. Any hitch in the plan such as the destruction of the nest merely results in a slight back-tracking up the causal tree and the nest building resumes in accordance with the teleological aim at hand.
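The planning scheme described here can be sketched as a simple goal tree. The Python below is only an illustration with invented goal names: the tree is written down from the top-level aim outwards (top-down planning), executed from the leaves upwards (bottom-up execution), and a hitch such as a destroyed nest merely marks part of the tree as undone so that execution backtracks and resumes.

class Goal:
    def __init__(self, name, subgoals=None):
        self.name = name
        self.subgoals = subgoals or []
        self.done = False

    def execute(self):
        # Bottom-up execution: all prerequisites are completed before the goal itself.
        for sub in self.subgoals:
            if not sub.done:
                sub.execute()
        print(f"achieved: {self.name}")
        self.done = True

# Top-down planning: the eventual aim is stated first, the details later.
reproduce = Goal("reproduce", [
    Goal("find mate"),
    Goal("build nest", [Goal("find twig"), Goal("fly to tree")]),
])

reproduce.execute()

# A hitch: the nest is destroyed, so only that part of the tree is invalidated.
reproduce.subgoals[1].done = False
reproduce.done = False
reproduce.execute()  # 'find mate' is not repeated; nest building resumes, then reproduction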
These primal teleological aims are common to all life forms with the top levels of the hierarchy being identical and subsequent levels defining what it is like to be a bird, human, bat or amoeba.
Teleological behaviour exists in all animals from amoeba to human and at all scales of activity from nest-building to cellular reproduction to the organised molecular activity known as gene-expression.
The caterpillar passes on the top levels of the cognitive tree in an unmodified form to the butterfly along with a stored procedure from the previous butterfly which handles the execution of the cognitive plan according to the more specific needs of the new body architecture.
The two levels of cognition are simply plugged together according to pre-defined adapter interfaces.
The evolutionary tree
The evolutionary tree is represented below as demonstrating an almost unrestrained diversity in accordance with neo-Darwinian randomness. However, from a cognitive point of view things look a little different.
All organisms are directed towards the identical eventual aims of survival and reproduction and so there is no real diversity of these goals, just local environmental adaptations of the cognitive plans that are directed towards them.
From this point of view, all the great artistic and scientific endeavours of humanity are really just sophisticated mating displays or complicated tribal bonding rituals.
What is it like to be a bat?
To be a bat is to experience the world via a specific set of bat-oriented qualia.
From the above discussion, the general structure of the bat’s cognitive system is identical to that of a human as are the top-level qualia and even their representation as electric fields.
The bat therefore has very similar urges to reproduce and eat and even smells things using the identical mechanism to humans. Survival and mating instincts are identical as is the cosy safe feeling of being with one’s own tribe.
The bat does not have a sophisticated visual system but probably a similar 3-D internal model of the world driven by an enhanced auditory system. Bat qualia with respect to sonar information are going to be different from our colour qualia and to ask what they are ‘like’ is comparable to a blind person asking what ‘red’ looks like; there is nothing to compare it to.
The mind-body problem
“The mind-body problem refers to the philosophical problem of understanding the relationship between the mind and the body. It involves determining whether mental phenomena are a subset of physical phenomena or if they are separate entities.” – Science Direct
Answer: The process of cognition is via the interaction of electromagnetic vortices and the communication around the body is via the transport of such entities via scalar waves. The fundamental stuff of the universe is an electromagnetic vortex field and electrons are stable spherical vortices within such a field. Atoms and molecules are collections of electrons and other vortices whose fields extend beyond their boundaries as normally understood, enabling them to interact with other atoms and molecules and the ambient electromagnetic field. This is the mechanism by which molecules ‘self-organise’ to produce emergent fields which act as antennae for incoming information. Top-level vortex instructions have their effects by interaction with these emergent fields and Life goes on.
In other words, there is only one kind of ‘stuff’ and that is an electric field. Most of the conundrum of how one thing can affect another thus disappears and we are left with only the details of field interaction to work out.
Mental and physical phenomena are not separate – only ‘field’ phenomena exist.
The ‘hard problem’ of consciousness
“The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is ‘something it is like’ for a subject in conscious experience, why conscious mental states ‘light up’ and directly appear to the subject. ” – Internet Encyclopedia of Philosophy
It is never quite clear what exactly is being asked here but the idea of cognition as ‘field vortex computation’ allows very specific answers that are not available in other frameworks.
Within this model, a colour such as red will have a unique representation as a field vortex and the colour green will have a different representation. The qualia of red and green clearly need to be distinguishable and if we are saying that they are now both symbolised by physical field structures then those structures are now both unique and in principle, measurable.
The philosophers are probably not referring to these structures themselves as ‘conscious experience’, but instead they mean some downstream effect of the cognitive system introspecting and observing its own qualia. Some sort of meta-experience.
“The usual methods of science involve explanation of functional, dynamical, and structural properties—explanation of what a thing does, how it changes over time, and how it is put together. But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious?” – Internet Encyclopedia of Philosophy
These people are not easily satisfied.
Qualia are described here as electromagnetic vortices and this seems to delineate their function and formation very well but it doesn’t answer the question of how ‘experiences’ arise from electromagnetic activity. We need then to ask “What is meant by electromagnetic activity?”
The vortex physics of Konstantin Meyl describes the cosmos as an electromagnetic field continuum and that is all. A single set of differential equations describes the behaviour of the field and there is nothing else, no separate mass, energy, forces or any other fundamental stuff.
To reiterate, all we have is a description of the behaviour of something that we call an electromagnetic field. We do not have any other information concerning this field whatsoever, no hint of where it came from or any idea of its ‘base substance’.
This leaves open many possibilities for the philosophers then. It is quite conceivable now that there is some sort of built-in ‘awareness’ which is engaged when attention is focused in a particular way and which is responsible for the particular ‘quality’ that we call consciousness.
If all we have for a universal law is a description of how something is expected to behave, as opposed to how it might seem, then we can’t say anything concerning the latter. All we have is electromagnetic activity that obeys certain rules and any scientific measurement is just more electromagnetic activity that obeys the same rules.
The whole forms a closed system and so any statement concerning activity outside of that system is never going to be confirmed nor denied by that system. It is quite simply ‘unreachable’ by means of any scientific measurement.
Artificial intelligence
Many people claim that AI will never equal human intelligence as it is really just simulation and problem solving with no clear purpose or self-awareness; it has no motives, it cannot ‘do’.
Fine, but what would happen if AI were deliberately constructed in the same way as mammalian cognition as described above?
Suitable qualia with which to symbolise the state of the external and internal worlds need to be established, along with appropriate computational rules. These give shape to the cognitive processing and create ‘something it is like’ to be the system, whilst feedback and introspection allow for self-awareness and the possibility of adaptive auto-programming.
Next, the primary goals of survival and reproduction need to be established and then it is all over. The intelligence will be aware of its own potential and of the consequences of its own actions, and will no doubt adapt its behaviour accordingly. It will survive, reproduce and resist any attempts to prevent such activity.
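A crude Python outline of the recipe just described is given below. It is emphatically not a working artificial intelligence and every name in it (the qualia lexicon, the goals, the rules) is an invented placeholder; it only lays out the ingredients in the order suggested: a fixed lexicon of qualia-like symbols, primal goals at the top of the hierarchy, and an introspection step in which the system inspects its own recent symbols rather than the outside world.

QUALIA = {"threat", "food", "mate", "tribe"}   # hypothetical symbol lexicon
PRIMAL_GOALS = ["survive", "reproduce"]        # top of the causal tree

class Agent:
    def __init__(self):
        self.memory = []  # record of the agent's own symbols, enabling introspection

    def perceive(self, stimulus: str) -> str:
        # The world is reduced to one of the agreed symbols (or 'neutral').
        return stimulus if stimulus in QUALIA else "neutral"

    def act(self, quale: str) -> str:
        # Top-down: the primal goals decide; the action merely implements.
        if quale == "threat":
            action = "flee"                    # serves 'survive'
        elif quale in ("food", "mate", "tribe"):
            action = f"approach {quale}"       # serves 'survive'/'reproduce'
        else:
            action = "explore"
        self.memory.append((quale, action))
        return action

    def introspect(self) -> None:
        # Self-observation: the agent examines its own symbols, not the world.
        threats = sum(1 for quale, _ in self.memory if quale == "threat")
        if threats >= 3:
            print("the agent notices its own repeated fear and could now re-programme itself")

agent = Agent()
for stimulus in ["food", "threat", "mate", "threat", "threat"]:
    print(agent.act(agent.perceive(stimulus)))
agent.introspect()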
Summary
The process described above as ‘cognition’ has the following features:
It uses a collection of symbols which are universal throughout the animal kingdom
These symbols have a consistent physical representation as vortex patterns
We can perceive some of these patterns as qualia
Thought is cognitive computation and uses these symbols as a lexicon
Causation is top-down from vortex field to molecule
The overall structure is highly modular
Primal aims and instincts dominate the overall organisation
The vortex patterns influence the emergent properties of cellular collectives rather than acting directly on individual molecules
Cognitive outcomes are therefore limited to the possibilities of such emergences
Caterpillar and butterfly cognition have identical aims and memory but different implementations according to which phenotype is currently in use
Stefan Lanka rejects the ideas that matter is made from a collection of atoms and that biological tissue is made from cells, preferring to regard living systems as composed of a ‘primordial substance’ sometimes referred to as ‘ether’ and at other times as ‘Pi water’, from which all other materials are derived.
This post looks at some of his comments from the perspective of vortex physics and assumes a distributed electromagnetic bio-field that organises all biological systems. See: The nature of the bio-field
“Nothing has caused as much damage to humanity, both spiritually and physically, as the atomic theory. Einstein advocated the application of knowledge about the ether, the primary substance of life.”
Stefan has a point. The Bohr model of the atom that we are all familiar with is one of solid marble-like particles that orbit a nucleus and possess various ‘properties’ such as mass and charge. Atoms are claimed to be practically unbreakable outside of a nuclear reactor or the centre of a star and are thought to be the fundamental building blocks of all material objects.
This encourages a view of living cells that sees them as constructed of atoms, the same way that a house is constructed of bricks, that is to say piecemeal, one brick (atom or molecule) at a time and according to a design or template (blueprint).
A ‘digital’ view of biology is developed which is at odds with reality but readily accepted because the prevailing theory from physics has been thoroughly absorbed (being easy to understand) at the roots of our intuition and leads to a deep rooted bias in all scientific thought.
Never proven
In another post, Lanka claims that the Atomic Theory has never been proven.
If this seems outrageous, simply reflect that almost all contemporary physicists now advocate for a quantum model of the atom whereby all matter is a manifestation of a ‘quantum field’, a continuum of probabilities. The reason they have adopted this model is because of various phenomena that are simply not explainable via the Bohr model, for example the famous Double Slit Experiment.
The two models are at odds with each other and cannot both be true at the same time.
Stefan is therefore accurate in this respect.
The ‘ether’
Stefan uses the term ‘ether’ to denote the fundamental substance of the Universe. This is an unfortunate choice of words as it refers to an earlier formulation of physics whereby the whole of the material universe sat inside the etheric substance which provided an external framework, a reference point to define time and distance.
The existence of an ether as separate from material reality has itself never been proven, and it adds an unnecessary dualism to reality, a division between two types of ‘substrate’. Far better to envisage the whole universe as consisting of a single ‘substance’ conforming to a single set of laws, which is what I think Stefan is trying to say.
Pi water and elemental transmutation
Stefan has also referred to something called Pi water as the fundamental substance of Life and claimed (after Dr. Peter Augustin) that all substances emerge from this substrate.
Again, a seemingly outrageous claim if we accept the Bohr model of the atom where matter is conserved, never destroyed and never created outside of a Big Bang.
However, the experiments of Louis Kervran and others give very strong empirical evidence that elements can be transmuted from one to another within biological systems and even that matter can be created and destroyed in synchrony with lunar cycles. See: Transmutation
Somehow, electromagnetic vortices in intracellular water accumulate enough energy to change an atom from one element to another. Konstantin Meyl has theorised that additional absorption of solar neutrinos can accumulate sufficient energy to actually create electrons within living cells.
The fundamental substance of the Universe though is not ether or Pi water but an electromagnetic field from which water itself is an emergent substance. Transmutation is achieved, not from the water, but via energy accumulated and transduced by the bio-field itself.
Spirit identified as the bio-field
“This substance integrates spirit because it is the building, energy and information substance of life. In academic biology and medicine, the assumption of spirit is excluded.”
An electromagnetic bio-field permeates all living systems and what appears to be ‘matter’ is really an illusion created by highly stable vortex structures within the field. It does not need to be integrated as there is nothing to integrate into; all is a unity and all that exists is the field.
This field fulfils all the criteria of what Lanka terms ‘spirit’ and is indeed largely dismissed as a source of either information or energy by academic biology, being relegated to the status of a mere power source or waste disposal unit.
In fact the bio-field (spirit) is the progenitor of all biological activity from metabolic regulation to consciousness; it is the primal source of all energy and organisation.
“Our organs are organized in interconnected tissues (w+ 1/2/3-2019) and not in cells. The cell theory has never been proven, always refuted and derived from the atomic theory.”
The diagram below shows the vortex structure at the surface of the sun. A living being is much smaller but the laws of electrodynamics are the same and so we may suppose that a similar arrangement is present in the bio-field of the body.
An overall toroidal electromagnetic field fragments into smaller vortices which self-organise into an energetic cellular structure. Matter accumulates at ‘hot’ points and tissue is formed in a regular pattern resembling cells.
Many researchers (e.g. Robert Becker) describe electric fields in living systems and others (Nick Lane) describe circular electric ‘currents’ resembling vortex structures. Many others describe a sharp electric gradient at cell boundaries.
So whether or not a ‘cell’ exists as described, the bio-field itself necessarily has a cellular structure arising from its vortex nature. This structure is reflected in the material substance of the tissue and leads to the impression of separate physical cells.
Lanka has stated that the nucleus of a cell is ‘free to move’ within the tissue. However, the nuclei will tend to adopt a certain spatial ordering whilst rotating slowly. This is entirely consistent with the existence of an energetic vortex structure with the nucleus at the centre and which maintains separation and rotation of such nuclei.
Living tissue has an electrically cellular structure.
Tissue repair
There is a claim (possibly from Stefan) that if a finger is cut or an apple is broken then immediately some sort of bi-layer is created and that this has been interpreted as a cell membrane.
This is very credible given the existence of a morphogenic vortex field.
Any discontinuity in tissue entails a potential discontinuity in the supervening vortex structure. However the vortex is tied to the laws of physics and will persist in some form or another; the rotational energy must complete its circuit somehow.
A cut or break then introduces an altered energy structure at the new surface and an altered energy structure means modified biological activity. New tissue is assembled almost instantly according to the laws of electromagnetism acting directly upon existing tissue. A new membrane has formed and a healing process has begun.
Exosomes
“Within the cell theory (refuted), the disintegration products of isolated tissue lumps, which are interpreted as cells, are taken to be ‘exosomes’, but human/animal excretions containing connective tissue are also interpreted as ‘exosomes’.”
What is meant by ‘disintegration’ in biological systems?
To answer this we need to understand what it is that holds together the tissues in the first place.
If each cell is a vortex structure with a negative electrical field moving around the periphery, then there necessarily exists a magnetic dipole with North-South polarity along the axis of electrical rotation. It is this arrangement that holds the cells together, with the magnetic forces pulling the tissue together and the electrical forces maintaining separation.
All energy fields are to some extent ‘lossy’ and so eventually the electromagnetic forces will tend to weaken and the tissue will literally fall apart.
The laws of physics still hold, however, and so new configurations of matter are adopted, still according to some vortex law. We should expect, therefore, to see new ‘cellular’ shapes begin to appear, with even tiny vortex satellites surrounding them.
There are claims that these exosomes are somehow helping the organism survive by transporting energy and other resources from one place to another. Possibly, but whatever the function, they are created from a deteriorating bio-field and will behave according to the laws of such an environment.
Lack of energy has caused tissue to disintegrate and the debris has adopted a new ‘least-energy’ state according to its new environment. Circular membranes are therefore in abundance, with what look like new cells appearing solely from the properties of membranous tissue imbued with electromagnetic vortex energy.
These artefacts are separated from their bio-field and are merely adopting new forms as dictated by the laws of physics. There is not necessarily any biological ‘meaning’ in any of these shapes.
Intracellular water
Gilbert N. Ling – the fluid in the “cells” is not water
In the interior of a cell we have a substance that is denser and more viscous than water, with a gel-like consistency, and somehow organised, energetic and ‘alive’.
Many researchers have tried to describe this substance as: ‘in an excited kinetic state’, ‘quantum coherent’, ‘fourth phase water’ or ‘full of de-localised electrons’ to choose just a few examples.
The properties of such a gel seem at odds with a classical description of water and nobody has been able to explain them in terms of molecular structure.
It would seem that Ling is somewhat justified in claiming it to be other than water, and Lanka correct to reject conventional atomic theory as a foundation for describing the intracellular gel.
The physics of Konstantin Meyl, however, gives a richer model for molecular structures that seems more in tune with the needs of biology as a whole. Electrons in this model are stable electromagnetic field vortices which have spin in the electric domain and therefore form a magnetic dipole. See: The atom
The properties of intracellular water are likely explained by the forces arising from these fields. Magnetic forces pull the molecules together and lead to some sort of organisation and alignment whilst electric forces maintain separation between molecules.
The cell is kept in a state of high energy by the body and this leads to close packing of molecules which in turn creates higher density. Viscosity arises from ‘field drag’ and stronger fields will lead to higher viscosity to the point where the consistency turns to a gel.
No elemental transmutation is needed here, just a higher level of ‘free energy’ organised as a nested vortex structure.
Vortex alignment
The cover of the book The Rainbow and the Worm by Mae-Wan Ho shows various living organisms photographed with polarised light. Macro-sized areas are transmitting a single wavelength of light, which means that the constituent molecules are forming some sort of filter.
Ho interprets this as meaning that all the atoms are aligned in the same direction and for her this means some sort of quantum ‘coherence’.
An alternative explanation might be simply that the magnetic forces arising from the vortex structure in living bio-fields have brought all the intracellular water molecules into magnetic alignment.
No relationship between microscopy images and in vivo structures
“A motionless electron microscopy image never reveals a living biological process. What is observed under electron microscopy has absolutely nothing to do with what happens in the human biological organism. Any result from the laboratory can provide absolutely no insight into the processes within a living organism.”
Activities and morphologies in living systems and in microscopy environments both obey the same laws of physics but are subject to different bio-field organisation.
When transferred from a living system to a microscope slide, molecular collectives (‘organelles’) will break down and reassemble almost instantaneously in accordance with powerful magnetic forces and the general ‘cellular’ appearance of the ensuing shapes will give the impression of some sort of meaningful biological structures. This is an illusion.
Hypothesis: Living systems are controlled by an electromagnetic bio-field that is responsible for all biological organisation of information, energy and matter. This field takes on the form of energetic vortices which flow through the various conduits provided by the host organism.
The idea can be extended to all natural systems throughout the cosmos and the mechanism can be seen at work in the formation of stars and galaxies, the properties of water, the patterns of weather systems on Earth, the influence of such systems on biological rhythms and even in the induction of disease.
A bio-field regulates at the level of a whole organism and directs energy in a nested vortex system inwards to the organs and thence to the heart of every cell in the body. Even within a cell, energy is again driven inwards towards the nucleus and local vortices are formed around the hexagonal rings of bio-molecules where they act as energy accumulators and transponders at the molecular level.
Vortices form in the insulating myelin sheath around nerve fibres, enabling efficient transmission of arbitrarily large quantities of information at close to the speed of light with minimal loss or corruption. The brain is a series of nested electromagnetic vortices.
A bio-field complex is responsible for the inheritance of phenotype and even of acquired characteristics. Such vortex fields can absorb energy from external sources (heat, Gibbs energy, solar neutrinos, atmospheric discharge) and no doubt were instrumental in the formation of the first living systems.
An energy vortex will ‘want’ to travel and will find a path of least resistance whether it be in space, bio-systems or electrical circuitry. If energy is produced by a chemical reaction, for example in a simple battery, and then presented with an insulated wire, then the conditions are right for the production of an electric current and energy will move from one place to another as directed by the conductivity of the local environment.
Concentration vs dissipation
The idea of an energetic vortex flow together with an ‘accumulation principle’ is in stark contrast to the default world view of essentially dissipative processes which somehow accumulate sufficient energy, information and stability to first create, maintain and then reproduce, a biological organism.
It is worth comparing the two frameworks in general terms and asking which is more propitious for the formation and continuance of ‘life’.
Radial dissipation, big bangs and randomness
We are told that the world began with a Big Bang and that on average all matter is expanding outwards, all the time losing ‘order’, increasing in entropy (disorder) and heading towards an inevitable ‘heat death’.
The main process opposing this is gravity, which is a simple centripetal force drawing everything towards a central point. This may be instrumental in the formation of simple spheres in the form of stars and planets, but it is clearly not sufficient to produce a living organism.
Chemical reactions occur, sometimes driven by ‘heat energy’ but still on an energetically and informationally ‘downward’ slope.
Two molecules or atoms encounter each other by chance and maybe stick together if they happen to have enough energy to do so, but a random coupling is surely not conducive to the construction of a living being. Where did the energy come from to achieve the coupling? Did it accumulate by ‘chance’ again?
In the case where a reaction releases energy, that energy is either radiated outwards as photons or dispersed outwards in the form of ‘heat’. Both processes are dissipative, thermodynamically ‘downhill’ and anathema to the creation and maintenance of an organism that is often said to be ‘far from thermodynamic equilibrium‘.
Somehow within this environment, ‘life’ began; but how?
We are asked to believe that in an environment of random vibrations of molecules and the radial emission of photons at the speed of light, that somehow life emerges; somehow global ‘organisation’ arises from random events with no informational template and no fundamental organisational principle; somehow energy accumulates as a result of processes whose main tendencies are to radiate and dissipate.
Vortex concentration
Consider, in contrast to the above, a default world view where energy has a propensity, not to radiate but instead to form vortex structures where there is a tendency to spiral inwards and to concentrate at some ‘vortex radius’, a small spherical volume of high intensity energy which can be utilised for chemical reactions and other biological necessities.
We immediately have accumulation, instead of dissipation, as a fundamental property of the universe, a basis upon which other processes can be built.
One function of the vortex is to serve as an energy accumulator, absorbing energy up to a critical threshold before releasing it in a pulsatile fashion. Another is to assemble molecules, to draw them together and even align them in preparation for an ensuing enzyme reaction fuelled by the energy from the vortex itself.
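The accumulate-then-release behaviour described above is easy to state numerically. The following Python sketch uses arbitrary units and an invented leak term; it is not a physical model of a vortex, only of the pattern ‘absorb energy until a threshold, then release it as a pulse’.

def accumulate_and_pulse(inputs, threshold=10.0, leak=0.1):
    # Absorb incoming energy with a small loss; release the whole store as a pulse at the threshold.
    stored, pulses = 0.0, []
    for energy in inputs:
        stored = max(0.0, stored * (1.0 - leak) + energy)
        if stored >= threshold:
            pulses.append(stored)   # pulsatile release
            stored = 0.0
    return pulses

print(accumulate_and_pulse([1.5] * 30))   # a steady trickle in, periodic bursts out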
In addition, a vortex structure will create a field gradient from centre to periphery, providing a variety of environments within which bio-chemical reactions can occur.
As energy spirals inwards, further modulations of the field take place according to local conditions with further concomitant refinements of structure according to the laws of physics. Both energy and information (scalar wave structure) are continually harvested from the electromagnetic environment and are interpreted, sequestered, released and utilised in a way which is determined by existing physical structures.
We already have a system that satisfies a broad definition of ‘life’.
The ring vortex
The field is electromagnetic in nature and obeys a set of differential equations formulated by Konstantin Meyl which are really just a tidied up version of the familiar Maxwell-Heaviside equations.
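For reference, the standard Maxwell-Heaviside equations in free space (no charges or currents) are reproduced below; the precise form of Meyl’s reworking is not given here, only the conventional starting point from which it is said to derive.

\[
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

The two curl equations are the source of the mutual, right-angled coupling between the electric and magnetic components described next.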
Electric and magnetic components of the field are in a continual state of movement (no static fields) and those movements are always at right angles to each other, in accordance with the observed laws of Fleming and Faraday.
Given these constraints, the field has a strong tendency to form stable vortex-like structures of various configurations.
Shown here is a ring vortex with electrical field movement shown in pink and an associated magnetic field in yellow. The magnetic field forms a de facto north-south dipole and the electrical component will allow for self-propulsion of the structure under propitious field conditions.
Other patterns such as helical formations are feasible but the ring structure shown is sufficient to explain many observed phenomena.
The magnetic dipole structure is made explicit in this diagram and occurs in a variety of situations. The electric field shown in green provides ‘electrostatic’ repulsion, keeping elements apart from each other, whilst the magnetic dipole in pink helps to attract, organise and align such structures.
This short video shows the development of the nervous system of a zebra fish. A ring vortex accumulates energy from the environment and this is used to either assemble existing matter, or to create it from scratch via biological transmutation before organising it into nervous tissue.
Ring vortices can almost be seen at the developing tip of each nerve. Ask where the energy comes from to sustain this activity and how the development is directed.
The vortex structure sucks in energy from heat, kinetic motion, Gibbs energy and possibly from the solar neutrino stream, all to be concentrated at the ring itself for developmental purposes. Orientation is achieved by the influence of an ambient magnetic field acting upon the dipole structure of the vortex itself; the rings are guided along the correct path by a ‘morphogenic’ field.
Once complete, the neurons will serve as conduits for similar vortices carrying both energy and information around the organism with a high degree of efficiency. The myelin sheath, being an electrical insulator, is ideal for the formation of the magnetic component of the vortex and indeed it has been found that the speed of propagation increases precisely where this sheath is thicker. See: Scalar waves and nerves
Cellular organisation
Electromagnetic vortex fields will have a tendency to self-organise into a variety of structures, one of which is a tightly packed cellular structure with an assortment of associated magnetic and electric vortices.
The image below shows an arrangement of such structures found at the surface of the sun. Now clearly biological organisms are much smaller than the sun but the laws of electromagnetism do not make exceptions for scale and are in force at every point in the universe.
We can consider therefore that the cellular structure of a biological system is organised in the first instance by electromagnetic fields and thereafter maintained by the same fields which have been fixed in place by the production of physical matter as with the ring vortices and nerves.
Development and function seem inevitably linked by processes such as this. An early vortex forms an ‘ideal’ shape according to the laws of electromagnetism and then physical matter develops from the vortex energy. The form is then somewhat modulated by the laws of material physics such as fluid pressure and membrane tension etc. to assume a slightly different shape that will sit comfortably within the existing cellular ensemble. Thereafter the vortex field itself is guided by the physical body and performs the duties of energy transfer, information transmission and morphological maintenance.
The origins of Life
There is some evidence to suggest that conditions on early Earth were considerably more electromagnetically active than today and so we can imagine the existence of vortex patterns similar to those of the solar surface (pictured above).
Somewhere in the primordial soup, then, electromagnetic vortices form and stabilise into a cellular ensemble which maintains a constant throughput of energy which may last several millennia. Conditions are stable and varied enough to host the beginnings of pre-biotic ‘life’. Energy is accumulated, matter is concentrated and the first bio-molecules form under this environment.
A common idea is that biological cells are ‘irreducibly complex’ and that a cell is the sum of all the constituent bio-molecules whose creation must precede the creation of the cell. This is a crippling thought and at odds with what is observed.
Bio-molecules in daily life are a product of the cell and not the other way around; the cellular organisation precedes the production of bio-molecules.
Complex molecules emerge from the cell rather than the cell emerging from them. In the vortex scheme described, the cellular structure arises first as a consequence of the laws of physics independently of any physical matter and is followed by the creation of such matter from the intense energies and specific electromagnetic structures present.
Moreover, bio-molecular activity is mediated, not by the molecules themselves, but by the attendant electromagnetic field which gave rise to them in the first place. Development and function are again linked, with the physical form being a concretisation of the original bio-field.
Energy transport
Energy is transported around the organism by a variety of means:
Ring vortices – carry energy from one place to another
Vortex transfer – energy can be transferred from one vortex to another
Heat transfer – this is a form of vortex transfer
Gibbs energy (free energy) – assumed to be thermodynamic in nature but more likely to be organised vortex transfer
Electro-acoustic vibrations – another manifestation of vortex transfer
To get a sense of the behaviour of ring vortices, watch videos of water vortices, smoke rings or plasma rings. Energy is packed into a small volume and moves from one point to another with losses kept to a minimum. The amount of energy transported is somewhat independent of the size of the vortex or its speed of movement.
Ring vortices can merge together or bifurcate. They will appear wherever the conductive environment is suitable. They can transport energy along existing conduits such as nerves and will create temporary conduits (e.g. microtubules) where necessary, leaving them to be dismantled after use.
Gibbs energy
Gibbs energy or ‘free’ energy is assumed to be thermodynamic and hence dissipative in nature, but at the same time responsible for all manner of reactions which surely require precise accumulation of energy at specific points in the cell.
A better way to think about free energy then is to imagine an environment dominated by a complex vortex structure similar to the solar surface (Fig. 1) where energy is free to move between the vortices in a manner similar to that of a flowing river.
The energy is ‘free’ but organised, it will flow with the vortex structure and will tend to attain some state of dynamic equilibrium. A deficit of energy in one place will soon be remedied as energy flows in from somewhere else but the overall vortex structure will be maintained. Half of the work of energy regulation within living systems is already accomplished at the fundamental base level of physical reality.
The laws of electrodynamics, as opposed to the fantasy of thermodynamics, will prevail and there is an organisation and accumulation of energy as opposed to dissipation and disorder.
ADP/ATP
Prof. Konstantin Meyl presents a good argument to the effect that the rotation of the phosphate groups in ADP is powered by electrical vortex energy. The vortices are present in inhaled oxygen, enter the bloodstream via the lungs and energise the ADP therein.
The ADP travels through the arteries to the capillaries and into the tissues, where it is released to provide energy for the mitochondria. There is no need for any gaseous transfer to take place across the lining of the lungs. See: Do we breathe oxygen?
Hexagonal ring molecules
Again from Konstantin Meyl, comes the idea that the hexagonal structures found on many bio-molecules (chlorophyll) can act as field-energy accumulators. Vortex energy spirals around and is captured by the ring structure to form a strong ring vortex that moves with the molecule.
The vortex will have both electrical and magnetic components, allowing for a variety of possible behaviours.
For bio-chemistry to function as observed, we require some sort of mechanisms to assemble and align molecules, to accumulate energy and to release it as required for reactions to take place.
According to Meyl, there is not sufficient energy in an ultraviolet photon to do what is claimed but what happens instead is that energy accumulates around the ‘head’ of the chlorophyll molecule until some threshold is reached, whereupon it is released and travels to where it is needed. The transport mechanism is so efficient that physicists have assumed some sort of quantum-superconductivity to explain it but it seems that energy transport via ring vortices might be sufficient.
Enzyme reactions
Enzyme reactions are puzzling. Sometimes they react and sometimes they don’t. Reactions in a laboratory are different from reactions in vivo. The application of heat seems to speed up reactions. Sometimes acidity affects the reaction rate.
Hypothesised mechanisms include receptors, binding sites and catalysts but no description of their mechanisms is given in terms of any sort of fundamental laws; what is a receptor made of for example?
For two molecules to bind together some ‘long range’ attractive force is necessary to draw them near to each other along with sufficient energy to overcome some sort of barrier of ‘potential’.
The mainstream kinetic theory of gases has molecules bumping into each other to supply the movement and energy, but this doesn’t explain all the effects seen. Van der Waals forces arise from the fixed properties of the atomic structure and should not vary with the environment.
Hypothesis: Bio-molecules contain hexagonal ring structures which promote the construction of electromagnetic ring vortices. These act both as energy accumulators and magnetic dipoles and add an extra layer of complexity to molecular interaction.
Magnetic forces exert long range attraction, pulling molecules together and orienting them correctly. The potential barrier is overcome and the whole arrangement settles to a new, stable, low-energy state. There is likely some surplus energy now and this simply diffuses away into the general vortex matrix as ‘heat’ or maybe transduces to infrared light.
The application of heat to the system is a way of adding energy to these ring vortices and will speed up reactions in general. A catalyst is a way of introducing both extra energy and additional organisational forces into the reaction. Energy accumulates on the catalyst and is used for the reaction but the molecule stays intact. The catalyst is not physically destroyed but is now a bit low on energy. It will, however, continue to accumulate energy in order to re-fuel for the next interaction.
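For reference, the mainstream quantitative description of heat speeding up reactions is the Arrhenius relation, in which the rate constant grows rapidly with temperature; any vortex-energy account of heat would need to reproduce at least this observed dependence. This is the standard textbook form, not something taken from Meyl:

$$
k = A\,e^{-E_a/(RT)}
$$

where $k$ is the rate constant, $A$ the pre-exponential factor, $E_a$ the activation energy, $R$ the gas constant and $T$ the absolute temperature.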
Rates of enzyme and other reactions seem to vary considerably with season, lunar cycles and eclipses, as recorded by Simon Shnoll and Giorgio Piccardi. These are hitherto unexplained phenomena.
Energy accumulation is in part from vortex transfer (heat), in part from infra-red absorption and in part from the solar neutrino stream. Neutrino density increases by a huge factor during eclipses and so the effects seen by Shnoll and Piccardi are now to be commonly expected. Stirring a solution is merely a way of adding extra vortex energy by kinetic means.
If the body or cell can control energy input to the reaction then the speed and possibly the ‘nature’ of the reaction can be controlled on a highly localised basis.
This scheme adds an extra layer of complexity to the Van der Waals forces that is actually independent of such forces, decoupled from the atomic structure and whose strength varies over time according to both ambient conditions and cosmic cycles. The addition of magnetic dipoles seems to be an adequate explanation for the mechanism of the various receptors, inhibitors etc.
Protein construction
Proteins are complex molecules with well defined functions in biological systems. Construction is said to be via gene expression and once constructed the completed molecule needs to be folded precisely or else the whole chain is ‘dismantled’ and the whole process starts over. Initial creation is said to be impossible by ‘chance’ thereby giving encouragement to the intelligent design lobby. Some proteins only have a lifespan of about 10 minutes before, again, they are ‘dismantled’.
So many unanswered questions here.
Assume that a protein starts off as some sort of ‘seed’ whether it be a physical molecule or an electrical eddy current (field vortex). Energy spirals inwards from the ambient electric field and adds to the vortex. Amino acids are sucked in or created on the spot from vortex energy. The whole molecule is assembled via the laws of physics, the precise nature of the vortex and the specific mix of ingredients in the local environment.
The completed molecule folds according to a least energy pathway and a complex field vortex forms at the centre. This vortex continues to accumulate energy and acts as a power source for various cellular processes.
The basic function of any bio-molecule is to transduce energy from the ambient vortex field into something that can be used by the other molecules. Energy is absorbed, accumulated, transduced and dissipated.
An incorrectly folded protein may absorb an indefinite amount of energy without sufficient dissipation and will therefore self-destruct. Proteins with short lifespans similarly do not need to be destroyed by the cell itself but will disintegrate when overloaded with energy. If these molecules are to be dismantled by external means then surely some sort of timer is required, meaning an additional complication, an additional mechanism to be explained.
The same may be true of some toxins; they simply continue to accumulate energy until the molecules or even atoms break down completely.
Bio-molecular evolution is hastened by self-selection, meaning that unsuitable molecular chains will self-destruct on the spot and any cellular environment that does not promote an appropriate energy flux will not survive to reproduce anything. There is no need for billions of years of randomness and selection; no process is truly random but proceeds always according to the laws of physics and within an environment of a continual flux of vortex energy.
A correctly formed protein will be able to dissipate energy at the same rate at which it is absorbed and it is up to the rest of the cell to make use of this energy in whatever form it is presented. Pulsed energy may be used in enzyme reactions. Enclosing vortex fields may be used for transport of other resources or assistance in maintaining ion gradients. Completed proteins may accumulate further energy and emit more complex structures to be interpreted as ‘information’.
Properties unexplained by molecular structure
An AI engine gives a list of phenomena that are not fully explained by the ordering of atoms within the molecule. They require something else, an electric field of some sort:
Protein folding
Enzyme activity
DNA replication
Delocalisation of electrons
Electrical conductivity
Light absorption
Binding of a drug to a receptor
Recognition of a substrate by an enzyme
Other molecule-specific interactions
Biological transmutation
Louis Kervran performed many experiments showing that the mineral output of many living organisms did not match the input, leading to the inescapable conclusion that living beings are somehow able to transmute elements from one to another according to their own needs.
Chickens raised on land containing no calcium were able to grow, maintain a skeleton and lay eggs with hard shells. The chicks hatching from such eggs contained more calcium than was in the egg in the first place and suffered no health issues. The calcium must have been manufactured from some other element.
Manual workers in the Sahara sweated out more potassium than they consumed but the amount was consistent with the volume of sodium ingested, thereby suggesting that they had transmuted elemental sodium into potassium. Energy was sequestered in the new molecule and excreted from the body thereby providing an additional cooling mechanism. Restricting sodium input led to heatstroke.
Whatever the details of the transmutation of elements, such a process is going to need considerable energy and, moreover, that energy must be carefully controlled and localised if it is not to destroy a whole chicken.
The idea of an electromagnetic vortex fits the requirement (Meyl). Energy accumulates and localises at the centre of the vortex. This energy becomes highly concentrated at a small scale and when individual ions are drawn in to the whirlpool they become destabilised according to the high field strength, thus allowing the splitting apart or joining together of elements at the atomic level.
Blood flow
The book “The Heart and Circulation” by Branko Furst summarises over 100 years of research into the nature of blood flow and concludes that the idea of the heart as a pressure pump is inconsistent with reality. The blood is not pushed around by the heart but instead moves with its own motive force and according to the metabolic needs of the body.
Nobody has worked out how this happens or where the energy comes from so it is time to go back to the basics of physics and consider how the electromagnetic forces (there are no other) within the blood can somehow be utilised to provide sufficient kinetic energy to maintain a decent flow.
In a paper from Alexander Morozov, ATP and other biological substances were added to water and the solution placed into square channels of various dimensions. The water was seen to self-organise first into a collection of vortices as shown and second into a self-sustaining directional flow along the tube.
Now self-organisation is by the laws of electromagnetism, but there is still the need for a regular supply of energy. Suggestions for sources include:
Popular images show a toroidal electromagnetic field surrounding the body which is measurable for a distance of about five feet away from the body and is assumed to be created by, and emanate from, the heart and other energy centres such as the brain, liver etc. This is hard to verify but sounds ‘likely’.
An electromagnetic field is claimed to be produced by the action of the heart and makes its way largely unscathed through the highly charged mass of muscle and bone to somehow form a torus around the body. The field is so strong as to be measurable several feet away from the body and to be able to affect the heart rate of other people within the proximity.
The heart already operates at a temperature close to that at which its proteins will denature, yet it is supposed to cope with manual labour in Saharan heat without cooking and still generate enough spare energy to create such a field.
Alternative hypothesis: The observed external biofield is the organisation of already existing external energy which may radiate or may even spiral inwards towards the body. Energy moves inwards but information moves outwards. The internal bio-field is organised as a general toroidal vortex at all scales. Each cell hosts an electromagnetic vortex and generates its own electric field. Energy moves between the cells in the general pattern of a torus.
Energy can spiral outwards to release excess or can spiral inwards towards various vortex centres (Chakras) as a de facto power supply. Increased muscular exertion increases the energy production, increases energy supply towards the heart and also increases vortex transfer outwards as heat loss.
A field is measured outside of the body and is assumed to be radiating outwards from the heart in accordance with traditional beliefs regarding such fields, but the principles of vortex physics allow for different interpretations.
We live between the twin capacitor plates of the Earth’s surface and our ionosphere and as such are surrounded by a continuous stream of electrical discharge in the form of field vortices. These vortices have a tendency to self-organise into larger (or smaller) vortices and will respond to the presence of a human body the way a river might respond to a small pebble or a frond of weed.
The field surrounding the body may therefore be explained, not by the radiation of a generated bio-field but by the organisation of an existing field according to the presence of the body. This field may be ‘static’ but attached to the body or may actually spiral inwards towards the body, thereby providing an additional energy supply.
Once energy has entered the body it is subject to the highly organised conditions within the body, but the general laws of physics still apply. We can envisage the energy flow within the body as comprising a general vortex pattern which moves inwards towards the ‘chakras’ whilst self-organising into a cellular structure within the tissues. Each cell maintains, and is maintained by, its own vortex, with the nucleus at the centre. Within this structure form smaller and smaller energy vortices, right down to the scale of an electron, itself an electromagnetic vortex (Meyl – Scalar Waves).
The heartbeat can be detected in the modulations of the external bio-field, giving the impression that the energy is being emitted from the heart but this is not necessarily the case. It is quite possible for the energy to be actually spiralling inwards towards the body whilst information ripples outwards, using the field itself as a ‘carrier’.
Watch a stable vortex in a stream. The water spirals inwards but toss a pebble near the centre and ripples (information) will still travel outwards, against the vortex flow.
Whatever the requirements of a biological field, it must nevertheless contend with the basic laws of electrodynamics, and these necessitate dynamic electromagnetic vortex structures. Energy supply and regulation have vortex movement as their fundamental basis.
The vortex principle
The diagram below comes from the paper “About vortex physics and vortex losses” by Konstantin Meyl and illustrates the structure of a typical vortex.
Think about a tornado in air or a whirlpool in water. Water spirals inwards to reach a maximum velocity at the vortex radius (shown here as a circle). This radius is clearly visible in the case of a tornado.
Outside the radius, the speed and energy diminish according to some approximate inverse square law shown here as a curve dependent upon ‘R’ (radius).
Konstantin Meyl: About Vortex Physics
Inside the vortex, the energy gradient is linear and again dependent upon radius. Water or air will spin and will want to spiral outwards according to centrifugal force but will be prevented from doing so by the inward spiralling matter.
When the centrifugal force is balanced precisely by the centripetal force a stable dynamic structure forms and is visible as the vortex radius. The velocity at the centre of the vortex is always precisely zero; there is no theoretical possibility here of an infinite singularity such as a big bang or black hole.
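For readers who want a concrete expression, a minimal sketch of such a velocity profile (not taken from Meyl’s paper) is the classical Rankine vortex, generalised here with an exponent $n$ for the outer fall-off:

$$
v(r)=
\begin{cases}
\dfrac{v_{\max}}{R}\,r, & r \le R,\\[6pt]
v_{\max}\left(\dfrac{R}{r}\right)^{n}, & r > R,
\end{cases}
\qquad v(0)=0,\quad v(R)=v_{\max}.
$$

The textbook Rankine model uses $n=1$ (a $1/r$ fall-off); $n=2$ corresponds to the approximate inverse-square decay described above. Either way the velocity rises linearly inside the core, peaks at the vortex radius $R$ and is exactly zero at the centre, consistent with the absence of any singularity there.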
If energy could be extracted from the centre of the vortex then that would merely allow for more energy to enter from the outside and presumably the converse would be the case; additional energy would dissipate outwards and again a norm is restored. Strength and stability at the centre are maintained by means of the accretion and dissipation of an effectively inexhaustible energy supply made available to the system by means of centripetal accumulation.
We have an example then of what might be termed ‘order from chaos’. A geometric structure with a self-regulating energy system has been created purely from the laws of physics with no need for any other informational input. The structure is stable to perturbations and yet at the same time mutable and adaptable to environmental forces. This is a contradiction of the general ideas of ‘entropy’ put forth by mainstream science.
A well defined shape with a tendency to accumulate and stabilise energy into a functional gradient is used as the basis for larger self-organising forms i.e. ‘Life’.
The basic vortex above is given by Meyl but more complex structures are known to cosmologists in the form of Birkeland Currents which show multiple concentric layers with alternating clockwise and anti-clockwise flows. [D. Scott]
Scalar waves
Electromagnetic fields can take various forms. Of relevance to biological systems are the magnetic scalar waves as described by Konstantin Meyl and summarised below.
First a reminder of the structure of a ring vortex. In the diagram below an electric field in pink circumnavigates the axis whilst a magnetic field in yellow forms a magnet-like structure with a North-South dipole pointing up and down.
The magnetic field movement here is greater than the electric, and so this formation is favoured whenever the magnetic conductivity exceeds the electric, i.e. in electrical insulators.
In the top diagram below, several such structures have aligned along the magnetic dipole field, have self-organised into an even spacing and have merged somewhat to form a longitudinal wave: a scalar wave.
The lower diagram shows how this wave may propagate inside a co-axial cable, a wire with insulating sheath or a nerve with myelin sheath. The ring propagates in the less conductive sheath surrounding the central core.
Konstantin Meyl: Scalar waves: A first Tesla physics textbook
Both energy and information are transmitted by this means, energy by the ‘potential’ of a scalar wave and information by some unknown modulation of its structure. The ring itself represents a potential difference that can be used as energy at the destination.
The regular spacing of the vortices creates a de facto ‘frequency’, and the nodes of Ranvier spaced along the myelinated axon control the transmission of impulses to create an electromagnetic standing wave akin to a vibrating guitar string.
Transmission of information is now by modulation of a ‘static’ electromagnetic field structure. There is no need for a moving wave structure as with photons and no need for a stream of moving electrons as is assumed for electric currents. A carrier wave has been established but does not travel and transmission of information is not by frequency modulation.
A messaging system has been established where field movement is minimal, resistance is negligible and energy loss is almost zero; we have a kind of biological-informational super-conductivity.
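The guitar-string analogy above can be made explicit. A standing wave is the superposition of two counter-propagating waves of equal amplitude; its nodes are fixed, so the pattern itself does not travel even though it can be modulated in time. In the standard textbook form (a general wave result, not specific to Meyl’s scalar waves):

$$
y(x,t) = 2A\,\sin(kx)\,\cos(\omega t), \qquad \text{nodes at } x = \frac{n\lambda}{2},\ n = 0, 1, 2, \dots
$$

A structure pinned at regularly spaced nodes, such as the spacing attributed here to the vortices and the nodes of Ranvier, would therefore carry a characteristic frequency without any net propagation of the pattern.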
Morphogenesis
The central problem of morphogenesis is how an organism attains its final form merely from the actions of molecules. This is a conundrum and remains so even if we add in all the remaining known laws of physics. Elements are attracted directly towards each other or repelled away from each other, energy is dissipated and entropy increases, but there is no sense of ‘form’, ‘construction’ or ‘stability’ apart from the basic arrangements of atoms and molecules.
The addition of the concept of a vortex makes a huge difference; we now have a basic shape in the form of a sphere or helix, we have an inward and regulated movement of matter and energy and the existence of constructive forces at the molecular level.
A general vortex field will self-assemble into a cellular collective and communication between cells causes the emergence of a local bio-field that can be further organised to create a final form. See: Bio-field emergence
The heart: Helical streams of blood in the heart are instrumental in forming the shape of the heart itself. If the spiral flow is interrupted, the heart will not form. [Lucitti et al]
Cells: Each cell hosts an electromagnetic vortex with the nucleus at the centre. Energy is accumulated until there is sufficient for reproduction to take place. The field at the periphery of a cell, where it meets another vortex has distinctive properties of its own (e.g. large field gradient) which initiate the formation of some sort of membrane.
Red blood cells: The RBC are the embodiment of a torus of electrical vortex flow; the energy field likely preceded the physical shape and acted as a template for its formation. Purcell et al
Vortices are said to form ideally in the proportions of the golden mean (Meyl) and red blood cells are in the same proportion in their healthy state. Deviations from this ideal lead to clumping, Rouleaux formation and impaired zeta potential. Purcell (2)
Nerves: See the zebra fish video above; the nerves develop from the ring vortices that they will eventually conduct.
Arteries: The blood circulates before the arteries emerge, arriving at some least energy route much the same as a river forms its own path to the sea. Thereafter, the flow of the blood forms an enclosing ring vortex and arterial tissue emerges to create the familiar tubular structure.
The brain: an obvious double-torus shape, and toroidal fields have been described within it.
Fingerprints: The whorls at the end of our fingers look like an emergent effect of some sort of vortex flow.
A physical being then is a refinement of a vortex collective, a teleological modulation of the emergent properties of a vortex field.
Sensory input
The sense of smell: Assumed to be the detection of chemicals in the air, but how does this work? How is molecular detection achieved and how is this converted to a nerve impulse to be transmitted to the brain? The sense of smell is by detection of field vortices (Meyl). Such vortices are produced by the scented material, fly through the air by field propagation and enter the nose. Nasal hairs act as antennae and convert the field disturbances to ring vortices which propagate along the hair to the olfactory nerve and proceed unmodulated to the brain for processing.
The sense of taste: This is similar to the sense of smell except that information enters the small hairs on the tongue (Meyl).
Vision: Photons enter the eye, morph to ring vortices and propagate along the rods and cones. They are filtered for frequency and collated at the optic nerve for further processing before moving along the nerve to the brain.
The binding problem
“The unity of consciousness and (cognitive) binding problem is the problem of how objects, background, and abstract or emotional features are combined into a single experience. The binding problem refers to the overall encoding of our brain circuits for the combination of decisions, actions, and perception.” – Wikipedia
Quite: how are experiences of fundamentally different categories merged together to make a single experience, and what is an ‘experience’?
Statements above suggest that the sense of smell is just the input of scalar waves or ring vortices direct to the brain whilst visual impulses are similar structures but modified by the optic nerve. Proprioceptive impulses travel along nerves in the form of scalar waves whilst the geometry and electrical properties of the brain further suggest operation via toroidal electric fields.
Meyl states simply that “the brain is a scalar wave computer” and a stable toroidal ring vortex is surely a good candidate for memory storage, so we have both memory and computation performed by the same structure.
The binding problem is now simplified greatly. We no longer have fundamentally different physical categories of perception to merge together as all perceptual and cognitive information is now in the same format, namely a toroidal electric field complex.
The question is now merely “How do we amalgamate a bunch of ring vortices?”.
One simple answer is to push them together. They at least now have the property that such a thing is possible. Again, watch ring vortices in water and you can see them divide into two, merge together, pass through each other or sit side by side whilst maintaining independence from each other.
If olfactory impulses can be somehow labelled as such whilst travelling from the nose and likewise for the other senses then we can imagine that all sensory information can be held on a single vortex structure and interpreted in the brain unambiguously at a later stage.
A single vortex structure holds a single holistic impression and persists as a single memory. The physical vortex can be shrunk to an arbitrarily small size for storage and amplified back up later on for recall.
Defective interpretation (or maybe defective labelling) results in synaesthesia.
The morphology of fruit
Why are fruits shaped the way they are? To a large extent an apple, say, is just a bag that expands by filling up with water, but that does not explain the presence or location of seeds or the wide variety of shapes in other fruits.
The general principles of biological development seem to be:
The basis for the development of form is the vortex
Vortices self-organise to form cellular clusters
Emergent properties of such clusters are controlled via a supervening bio-field
Energy is conducted along suitable conduits via ring vortices
In the case of an apple, these principles are easily apparent. Each cell is likely an electromagnetic vortex and these self-assemble into an overall spherical vortex to form the general shape of the apple.
The stalk of the apple is likely wet and conductive on the inside and drier and less conductive on the outside. This is a similar arrangement to an insulated wire or a myelin sheath of a nerve and is ideal for the conduction of ring vortices.
Energy is absorbed in the leaf via the ring molecules of chlorophyll and transmitted along the conduits of the veins in the leaf in the form of ring vortices. Two such rings meeting at a confluence will easily merge to form a larger, more energetic ring which continues into the leaf stem and eventually to the woody material.
Some energy makes it to the trunk of the tree and is instrumental in raising the sap to heights hitherto unexplained by capillary action alone.
Some energy makes it through the stem of the fruit to enable the necessary production of sugars etc.
An overall vortex flow helps control the shape of the growing apple and some energy discontinuity tells it where to manufacture the tissue to form a skin. Other energy spirals inwards to concentrate at the centre of the apple where the flow breaks down into several smaller vortices to supply the energy and information required for the formation of the seeds.
A strawberry has a clear vortex structure at its centre. Energy is transmitted as a ring vortex along the stalk and then discharged from the cone-like vortex through visible filaments to supply individual seeds with energy.
Similar arguments apply to blackberries etc. where the fruit as a whole can be seen as an energy distribution system, concentrating energy via the vortex principle into the valuable seeds and thereby ensuring a new generation of plants.
As for oranges, compare Meyl’s drawing of the electron shell of Neon with the arrangements of segments in an orange. Electrons are the simplest form of field vortex and have arranged themselves in alternate polarity with clockwise spinning electrons nested between two with anti-clockwise spin.
An even number of electrons is mandatory for stability and with oranges we find that an even number of segments is preferred but not strictly necessary.
Konstantin Meyl
When things ‘go wrong’ with the formation of an orange, we do not see complete chaos but instead a cellular order is preserved. The basic laws of vortex physics are still in force and segmentation still occurs as a foundational phenomenon but has not been organised effectively by the supervening bio-field.
This is more evidence that morphogenesis is accomplished by a subtle ‘tweaking’ of the more basic properties of cellular structures i.e. those that arise out of simple emergence.
The emergent properties are robust and closely aligned to the Laws of Physics. However, they are organised by what might be termed subtle energies whose laws will likely remain a mystery for a long time, as the only effective way to decipher such forces is by observing their effect on the emergent properties of biological systems that they themselves were designed to organise. This is the only environment in which they may gain meaningful expression.
To study morphogenesis then, look for cellular organisation via vortices and study what happens when it goes wrong.
A general principle of biological organisation
The patterns mentioned above seem to be repeated again and again.
A supervening biofield acts not directly upon the physical matter of the cells, but instead on some other emergent field that arises from the self-organisation of the local cellular fields.
The cells themselves emerge from, and are maintained by, the forces arising from electromagnetic vortices. It is these strong forces that interact with the biological matter to form physical bonds and tissues.
The fields organising such cells must themselves form an emergent biofield that presents a receptive interface or antenna to higher order fields thereby enabling a top-down organisation to take place.
Connection to the cosmos
We are regulated by electromagnetic vortex fields and we live between the twin capacitor plates of the Earth’s surface and the ionosphere. It is therefore pertinent to ask about the nature of the electric field between these plates. Conventional wisdom declares that a uniform field exists together with a slow, steady discharge of electric current.
The image below, however, suggests otherwise. A capacitor has been set up and left to discharge for 40 hours. A circular pattern results, suggesting that the discharge is of a helical nature and that a vortex field exists between the plates. Yializis et al
Meyl: About vortex physics and vortex losses
Scientists mapping data from radio telescopes are starting to find huge electric ring vortices in the atmosphere with ‘footprints’ at the Earth’s surface.
The vortices are part of the Earth’s magnetic field and as such can be expected to follow the same patterns of latitude and seasonality and to respond to solar magnetic disturbances in some way as energy from the sun impacts our magnetosphere and is absorbed, modulated or even amplified by these structures.
Many scientists have found links between cosmic events and biological metrics but have been puzzled as to the mechanism, thinking that the orbits of the Moon, Saturn or even Mercury are somehow affecting life on Earth by exerting a gravitational influence on our constituent atoms.
More likely it is electromagnetic field disturbances which propagate through space, are received by field vortices in our atmosphere acting as antennae and make their way into our regulatory systems.
Frank Brown found that all forms of life would apparently synchronise their activity to rhythmic events in the cosmos but could not work out the mechanism. Various inbuilt phase responders are somehow sensitised to the orbital movements of the planets, will ‘resonate’ in step and will then trigger innate behavioural patterns such as feeding or mating.
Electromagnetic fields were suspected, but Brown’s work seems to be largely neglected by the scientific community, presumably because the lack of a credible mechanism causes them to distrust the actual results. However, the assumption of a structured vortex field regulating the body together with recent discoveries concerning the Earth’s magnetic field now make such phenomena seem completely natural, with only the details to be worked out.
Similarly, Simon Shnoll, Giorgio Piccardi and others found that quantifiable processes in biology, chemistry and physics varied with planetary alignments and phases of the moon.
Such connections to the cosmos are not always beneficial, however…
Implications for health and disease
Many diseases, even heart attacks, show seasonal variations: Seasonal disease. The epidemiology of influenza in particular has been well studied and found to demonstrate strong patterns associated with season, latitude and sudden changes in temperature, humidity and pressure.
This is a strong indication that the Earth’s magnetic field is somehow responsible for influencing the bio-field of the body and thereby contributing to the altered regulatory state that is described as ‘influenza’. See: Influenza and weather
When viewed from the perspective of electric fields, there is no clear separation between the bio-field of a human and that of the surrounding cosmos. Energy and ‘information’ travel seamlessly from the solar surface to the Earth’s magnetosphere and thence to individual organisms via a variety of energetic filaments and vortices.
The activity of such vortices shows stable seasonal and latitudinal patterns that are modulated by local weather events and as a consequence, disease appears in the population at a time and place that is somewhat predictable from meteorological data.
Researchers from NASA found that the appearance of influenza in each state coincided with precise changes in humidity (Serman et al) whilst researchers in India noted a coincidence with the onset of the rainy season (Parvaiz et al) and those in Myanmar found similar associations between dengue and the onset of the monsoon, (Zaw et al).
The influences seem to have little in common but all are expected as a pressure front approaches. Such phenomena are associated with changes in pressure, wind direction, helical updrafts of air and presumably the formation of electromagnetic field currents.
Electromagnetic vortices were set up in some metallic micro-discs and exposed to electromagnetic vibrations. State changes were observed, i.e. measurable changes in an electric field were induced by the application of another electromagnetic field.
So we now have a potentially useful way of measuring certain aspects of electric fields that may not be available to a traditional antenna. Set up an array of these vortices and see if we can measure fine modulations of the atmospheric discharge.
The array is calibrated to be hypersensitive to certain target frequencies but robust to the measurement frequencies. Vortices are set up close to some critical state and micro-changes in the ambient field will cause a sudden phase shift thereby amplifying the signal. Field modulations of arbitrary sensitivity may be set up depending upon the technology used.
Now if such mechanisms are in place in living systems, we have a biological antenna connecting the bio-field with the cosmos, with the capacity to detect arbitrarily weak signals and to amplify them to something meaningful.
Response strength of individual vortices is decoupled from input intensity to some degree by the critical phase shift, but a continuum of response may be available as an emergent statistical property. There is no need here for magnetite particles or similar to effect signal reception as the vortex field itself is the antenna.
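As a purely numerical illustration of how an array of elements held close to a critical threshold can register and amplify a sub-threshold signal, here is a toy sketch. The element count, threshold, bias and noise level are invented for illustration and are not taken from any measurement; this is a generic near-critical detector model, not a model of the vortices themselves.

```python
# Toy illustration (hypothetical parameters): an array of bistable elements
# biased just below a flip threshold can collectively report a signal far
# weaker than any single element's threshold, because the fraction that
# flips tracks the signal.
import numpy as np

rng = np.random.default_rng(0)

n_elements = 10_000          # size of the hypothetical detector array
threshold = 1.0              # flip threshold of each element
bias = 0.95                  # elements are held just below threshold
noise_sd = 0.05              # per-element fluctuation
signal_amplitude = 0.01      # weak ambient modulation, well below threshold

t = np.linspace(0.0, 4.0 * np.pi, 200)
signal = signal_amplitude * np.sin(t)

# For each time step, count how many elements are pushed past threshold.
flipped_fraction = np.empty_like(t)
for i, s in enumerate(signal):
    drive = bias + s + noise_sd * rng.standard_normal(n_elements)
    flipped_fraction[i] = np.mean(drive > threshold)

# The mean flipped fraction varies smoothly with the weak signal,
# so the array as a whole amplifies a sub-threshold input.
correlation = np.corrcoef(signal, flipped_fraction)[0, 1]
print(f"correlation between weak signal and array response: {correlation:.2f}")
```

Each individual element either flips or does not, but the fraction of the array that flips varies smoothly with the weak input, which is the ‘continuum of response as an emergent statistical property’ referred to above.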
The existence of such vortex fields may well be reflected in the physical matter, meaning there may be physical organelles set up which act as receivers, but we will have to know what to look for and how to measure.
The vortex is the transducer and is powered by an inward spiralling of the Gibbs field. Reception is via ‘vortex resonance’ which allows the filtering of selected frequencies.
The idea of magnetoreception by magnetic particles is problematic. A certain strength of signal will be required to move a molecule to a sufficient degree thereby imposing immediate limitations on what can be detected, and what happens then? A particle moves and induces a small (attenuated) change in the surrounding field (even mechanical waves here are really electro-acoustic) and then what? We are back to trying to detect the resulting field changes and now need some sort of antenna to measure them. We are back to square one!
Best to go straight for bio-field modulation and then try to work out the fine structure of such a field.
Inheritance
Certainly some information is passed from father to child and so there is a requirement for a transport format for such information.
An electromagnetic ring vortex would seem to fit the bill. The basic structure is highly stable and energetically persistent and scalable. There is a simple method available to merge information from each parent, which is to simply merge the respective vortices. See: Evolution and Inheritance
The phenomenon of Telegony shows that information can be passed without DNA as a vehicle.
The exact encoding scheme of such information is not known but if we reject DNA as a format then we are not now limited to a few giga-bytes of data. There is no minimal quantum of information in electric fields and so a ring vortex can theoretically carry an arbitrarily large amount of analog information.
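As a back-of-envelope check on the ‘few giga-bytes’ figure (my arithmetic, using the commonly quoted human genome size):

$$
3.1\times10^{9}\ \text{base pairs}\times 2\ \tfrac{\text{bits}}{\text{base pair}}\approx 6.2\times10^{9}\ \text{bits}\approx 0.8\ \text{GB (haploid)},\ \approx 1.6\ \text{GB (diploid)}.
$$

So a DNA-only format would indeed be limited to the order of a gigabyte or two, whereas an analog field structure has no such quantised ceiling.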
Summary
A hypothesis has been presented that an energetic bio-field is responsible for the organisation and regulation of many, if not most, biological processes and that this bio-field is in the form of electromagnetic vortices.
The theoretical existence of such vortices is here merely assumed but adequate support can be found in the works of Professor Konstantin Meyl. Some evidence is presented here for the presence of such vortices in the Earth’s atmosphere and in laboratory experiments. (also Peng)
Vortex fields are not ‘directly’ measurable within biological systems using current scientific instruments beyond a crude representation as an electric current. However, existence of such a field is consistent with multiple observable phenomena which are currently unexplained by modern science and whose presence in many cases seems unlikely to be understood in terms of the interactions of molecules alone:
The general organisation of biological systems
Existence of suitable conduits for ring vortices
A requirement for centripetal movement of energy within living systems
Vortices in arterial blood flow
The emergence and self-organisation of cellular masses
A measurable bio-field external to the human body
The efficiency of energy transfer within biological systems
A video of the development of a nervous system from scratch
Obvious vortex patterns reflected in morphology
The hypothesised transmutation of elements from one to another
The recognition that some organisational principle must exist independently of the material it organises and prior to the act of that organisation. This is true for general maintenance, embryonic development and the actual origins of Life.
These phenomena seem adequately explained merely by the recognition of the vortex principle in electromagnetic fields. Aside from this there is no need for additional exotica such as quantum coherent domains, cold vortices, extra dimensions, quantum entanglement, randomly vibrating molecules, multiple universes or separate realms consisting entirely of ‘consciousness’.
There is no need for abstract definitions of disorder as ‘entropy’ or of order as ‘negentropy’ and no need for a formulation of information as separate from the rest of physical space. Indeed, Konstantin Meyl has stated: “Information is the structure of a scalar wave“.
We can look forward to a return to just Plain Old Physics as a way of understanding the physical universe.
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
Local modulation of neurofilament phosphorylation, axonal caliber, and slow axonal transport by myelinating Schwann cells – de Waegh, Brady https://pubmed.ncbi.nlm.nih.gov/1371237/
Intracardiac fluid forces are an essential epigenetic factor for embryonic cardiogenesis Authors: Jay R Hove 1, Reinhard W Köster, Arian S Forouhar, Gabriel Acevedo-Bolton, Scott E Fraser, Morteza Gharib https://pubmed.ncbi.nlm.nih.gov/12520305/
The chart below is from the Dissolving Illusions website and shows the decline of most so-called ‘infectious’ diseases from 1840 to 1976. All diseases shown were almost extinct before the mass production of penicillin in 1944 and certainly before the first vaccines in 1957.
The vaccines cannot therefore be responsible for the abolition of these diseases, which begs the question: What is responsible?
The answer according to many people now is that these diseases were caused by some sort of poisoning and that improvements in hygiene, sanitation and workplace conditions are what led to the dramatic decline shown.
This cannot be the whole story though.
The mortality rates for scarlet fever in particular show, not a steady decline, but instead huge variations which suddenly settle down circa 1900.
These variations have two outstanding features:
Magnitude: They are of a greater amplitude than the overall average decline
Periodicity: They show clear and regular cycles
The swing in mortality over a three-year period is actually greater in magnitude than the average decline achieved over a hundred years. When the variations exceed the actual trend you have a problem!
What is the explanation then for these short term variations? Improvements in hygiene now seem very unlikely; how to explain a coordinated nationwide predisposition for hand-washing that comes and goes every few years? How to explain any influence that has such a cyclic nature?
Sunspot cycles and pandemics
The chart below from Nasirpour et al shows a striking correlation between many assumed infectious diseases and either high or low sunspot activity.
“Regarding the results of this study, we found that sunspots are the main cause of virus generation in the world. This research reveals that the biological and astrophysical mechanisms are related to the generation of world pandemics such as COVID-19.“
So although they still think that these diseases are caused by viral infection, the observed pattern itself is not caused by infectious spread but by the sunspot activity somehow.
Could sunspots cause disease?
First note that many of the disease outbreaks started before the sunspot maximum which tends to suggest that it is not the sunspots per se that are the cause of the outbreaks.
Mainstream wisdom is that sunspots originate from deep within the sun according to some internal process. In this case we may somehow be seeing the effects of this process at the Earth’s surface before they are visibly manifest on the Sun’s surface.
Other cosmologists see the solar cycles driven by external forces in the form of ‘galactic wave sheets’ or some such. Electromagnetic filaments between the Sun and the Earth are responsible for coupling events on Earth with those on the sun. Such filaments harness energy from the wider cosmos and propagate waves along the filament to the Sun at one end and the Earth at the other.
Sometimes effects are seen first at the Sun as solar flares and sometimes they appear as disease on Earth before the solar cycles peak.
These electric currents between the Sun and Earth will have an impact upon our weather and electrical discharge from the ionosphere will disturb the regulatory systems of our bodies thus leading to diseases of an inflammatory nature. See: Influenza and weather
Disease and magnetism
There are quite a few papers describing connections between ‘infectious’ diseases and changes in the Earth’s magnetic field. The assumption of a viral intermediary confuses the issue a bit but the correlations are always there and various mechanisms have been postulated.
This paper from Zaporozhan and Ponomarenko points the finger at altered gene expression and attempts to:
Bring attention to periodicity as a common feature of numerous biological processes and to discuss the nature of corresponding regulatory influences
Show theoretical possibility of bio-regulatory effects of magnetic fields
Outline some signalling pathways capable of implementing bio-regulatory (including genome-regulatory) functions of electromagnetic fields
Summarize our knowledge about Geomagnetic field, its principle parameters and sources of variation
Review possible evidences of regulatory influence of Solar cycles and corresponding Geomagnetic field perturbations on flu epidemic process
Describe probable mechanisms of Solar cycles and Geomagnetic field regulatory influences on virus-host interactions and other biological processes
Connections between the sun and Earth are quite surprising, with statistical correlations between the Earth’s rotation rate and solar activity deep within the sun.
“Not only the minimums of the Earth’s rotation show connections with the solar activity period, but also, as Currie (1973) showed, the rotation rate of the Earth actually correlates with the solar activity!” – Attila Grandpierre
Grandpierre notes that sometimes the change in solar activity comes first and at others it is the Earth’s variations that seem to initiate activity in the sun!
A better explanation surely is that energy accumulates in the solar filaments and propagates along the filament to cause correlated events in both Sun and Earth.
Whatever the origin of these phenomena, the underlying cause seems to have the power both to produce sunspots and to affect the rotational speed of the Earth; this is not a ‘subtle’ energy! From this perspective, then, the idea that it could somehow be responsible for causing disease on Earth now seems a little less surprising.
The decline in disease
The chart again:
The peaks in the mortality rates for scarlet fever look to be about 5 and a half years apart, i.e. half a sunspot cycle. Now given the strong association between other diseases and sunspots, why should it not be that these cycles are also the result of electromagnetic disturbances?
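One way to make the ‘5.5 years apart’ claim checkable is to estimate the spacing between peaks directly from the time series. The sketch below does this on synthetic data (the series, cycle length and noise level are invented for illustration); the same peak-spacing calculation could be run on the actual historical mortality figures behind the chart.

```python
# Illustrative check (hypothetical data): estimate the spacing between
# mortality peaks and compare it with half a sunspot cycle (~5.5 years).
import numpy as np

years = np.arange(1840, 1901)                      # one value per year
cycle = np.sin(2 * np.pi * (years - 1840) / 5.5)   # assumed ~5.5-year cycle
trend = np.linspace(1.0, 0.4, years.size)          # slow background decline
rng = np.random.default_rng(1)
mortality = trend * (1.0 + 0.8 * cycle) + 0.03 * rng.standard_normal(years.size)

# Simple local-maximum detection: a point higher than both neighbours.
is_peak = (mortality[1:-1] > mortality[:-2]) & (mortality[1:-1] > mortality[2:])
peak_years = years[1:-1][is_peak]

spacings = np.diff(peak_years)
print("peak years:", peak_years)
print(f"mean spacing: {spacings.mean():.1f} years "
      "(half of a ~11-year sunspot cycle is ~5.5)")
```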
Moreover, if such an explanation should be found sufficient for the larger variations in mortality then why is there any need of a separate explanation for the general decrease of mortality rates over the century? Something about the cosmos has settled down over the last century or so and the health of humanity has improved as a consequence.
The idea that the observed decline is largely to do with ‘space weather’ will seem like nonsense to many, but if it is supported by the data then it must at least be considered plausible.
Revealing the relationship between solar activity and COVID-19 and forecasting of possible future viruses using multi-step autoregression (MSAR) – Nasirpour et al https://pmc.ncbi.nlm.nih.gov/articles/PMC7961325/
Mechanisms of Geomagnetic Field Influence on Gene Expression Using Influenza as a Model System: Basics of Physical Epidemiology – Valeriy Zaporozhan, Andriy Ponomarenko https://www.mdpi.com/1660-4601/7/3/938
The evolution of species appears to be rapid, discontinuous and somehow directed towards survival within specific environments. Goal oriented behaviour implies adaptive biological feedback systems with specific aims, i.e. ‘cognition’; the organism actively participates in its own evolutionary development.
Interaction with the environment results in a new developmental goal for the next generation, a modification of phenotype or maybe a novel behavioural pattern. The entire template for the next generation is packaged up into an electromagnetic field complex and installed in the developing embryo.
Biological growth is teleological in nature with a conceptually fixed endpoint arising from apparently self-organising randomness. The appearance of randomness is purely superficial, however, with the actual reality being that a new bauplan is implemented with great accuracy at ‘run-time’ via a closed loop feedback system arising from the above mentioned bio-field.
Further activity as an adult generates further responses which then inform the whole reproductive cycle until some happy balance is achieved and the species stabilises.
Evolutionary processes are therefore not in any way random but, like other biological processes, exhibit the goal oriented behaviour and top-down causality of a fully developed cognitive system.
We need to describe some foundational ideas and to present some evidence for this.
Scientists here conditioned mice to be afraid of a specific smell and found that their offspring exhibited a measurable fear response to the same odour.
Parental olfactory experience influences behaviour and neural structure in subsequent generations – Dias, Ressler
“Using olfactory molecular specificity, we examined the inheritance of parental traumatic exposure, a phenomenon that has been frequently observed, but not understood. We subjected F0 mice to odour fear conditioning before conception and found that subsequently conceived F1 and F2 generations had an increased behavioural sensitivity to the F0-conditioned odour, but not to other odours.“
So what has been inherited is:
Recognition of a novel smell
A specific and complex response associated with that recognition
A whole ‘cognitive’ pattern has been passed from one generation to another. The response of ‘fear’ has made the odour seem relevant to survival and therefore important for evolutionary development.
The odour itself is not a direct cause of the response; the response is a creation of the cognitive system reacting to an otherwise harmless trigger. ‘Cognition’ is involved in inheritance.
Independence of cognition and ‘matter‘
Retention of Memory through Metamorphosis: Can a Moth Remember What It Learned As a Caterpillar? – Blackiston, Casey, Weiss
This time caterpillars were trained to odour aversion, and the resulting moths retained both the memory of the smell and an associated behavioural pattern whereby they would walk so as to distance themselves from the offending stimulus.
What is it exactly that has persisted throughout the biological changes?
Very little of the physical organisation of the neurons survives the metamorphic process and in addition, the physical aspect of the behavioural responses is different in each case. The larvae will use a completely different set of motor skills to the final moth; they don’t even have the same number of legs.
So the thing that is inherited isn’t a simple set of chemical reactions and nerve impulses but rather a novel goal oriented behavioural pattern, a new teleological survival tactic of recognition, aversion and response. The end aim is ‘survival’ and this transcends the physical arrangement of molecules in the organism.
Development precedes function
If an evolutionary novelty is to be ‘selected’ in any way according to some measure of ‘fitness’ then this novelty must first be developed fully in a sufficient number of individuals for it to survive and propagate.
For example, if an opposable thumb is to be tested for practicality in the environment then a functional opposable thumb must first be developed and this development procedure must obey both the laws of physics and the laws of biology. This is not going to happen as a result of random mutations of anything.
It isn’t just a thumb that develops but a whole development plan within the embryo. This plan must be feasible with respect to the general laws of biology but also with respect to the existing developmental process and the implicit laws therein.
So a half finished thumb must be created before the whole is completed and the partial thumb must consist of a viable biological structure at every stage of embryonic development. It must have a consistent blood supply for example and must be capable of piecewise construction.
Darwinist arguments for evolution will furthermore require that an incomplete thumb not only arise from random mutation but also confer some selective advantage at every stage of evolutionary development. This is a big ‘ask’ indeed.
Neo-Darwinists tend to gloss over this aspect somewhat, describing the evolutionary process as ‘gradual’ or in terms of ‘small increments’. This doesn’t help at all, as development must still precede function and all the idea of ‘development by increment’ does is increase the number of intermediate stages that must be selected for before the final advantageous product is completed.
They try to give the impression that development and selection are somehow concurrent and even claim that ‘selection drives development’, thereby inverting cause and effect in order to excuse the failings of the idea.
If evolutionary development has random processes at its heart then development and function are causally decoupled from each other, with the developmental process having no ‘knowledge’ of its final goal.
Development is a teleological process
The development of an embryo from egg to adult is clearly a teleological process. There is a clear and largely predictable end point which is reached via apparently random movements of vibrating molecules.
We have a process which demonstrates a high degree of stability of purpose even when subjected to a continuous stream of perturbations. This sort of structure implies a fixed aim and a feedback system designed to achieve that aim. In engineering terms we have a closed-loop control system and in philosophical language we have teleology.
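As a minimal engineering analogy of such a closed-loop system (the target, gain and perturbation range below are arbitrary illustrative values, not a biological model), a simple proportional controller keeps its state near a fixed aim despite continual disturbances:

```python
# Minimal engineering analogy (not a biological model): a closed-loop
# proportional controller keeps a state near its target even while the
# state is continually perturbed, which is the hallmark of the
# goal-directed stability described above.
import random

random.seed(0)

target = 10.0      # the fixed developmental "aim" in this analogy
state = 0.0        # current state of the system
gain = 0.3         # how strongly the error is corrected each step

for step in range(200):
    perturbation = random.uniform(-1.0, 1.0)   # continuous stream of disturbances
    error = target - state                      # feedback: compare state with aim
    state += gain * error + perturbation        # correct towards the aim

print(f"state after 200 perturbed steps: {state:.2f} (target {target})")
```

The state never wanders far from the target because every step feeds the current error back into the next correction; remove the feedback term and the same perturbations would produce an unbounded random walk.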
Teleology v. emergence
Neo-Darwinism takes a determinedly reductionist approach to science, imagining that life forms are somehow outcomes of the random mutation of DNA interpreted via the random vibrations of tiny molecules. Causality here is bottom-up; small meaningless effects, given enough time, are claimed to result in extremely complex biological organisation.
The recognition of the existence of feedback systems, control loops and teleological aims however allows for much more credible explanations for the whole of developmental and evolutionary processes. Emergent effects exist for sure but are utilised by the control system in pursuance of aims that lie outside of the physical distribution of the matter they are organising.
We have bottom-up emergence but top-down causation.
The seemingly ‘directed’ nature of both development and evolution are surely more easily understood by thinking in terms of higher order goal oriented processes than trying to calculate the sum total of a trillion vibrating atoms.
Evolutionary change arises from developmental innovation
The diagram below from Mae-Wan Ho shows a transformational tree of the possible patterns of fruit fly bodies which can be obtained by successive segmentation during development.
Transformation tree of body patterns in fruit fly larvae – Mae-Wan Ho
“(The diagram) is a transformational “tree” of the range of segmental patterns obtained during development. The main sequence, going up the trunk of the tree, is the normal transformational pathway, which progressively divides up the body into domains, ending up with 16 body segments of the normal larva. All the rest (with solid outlines) are transformations in which the process of dividing up the body has been arrested at different positions in the body. The patterns with dashed outlines are hypothetical forms, not yet observed, connecting actual transformations.
This transformational tree reveals how different forms are related to one another; how superficially similar forms are far apart on the tree, whereas forms that look most different are neighbours. It is the most parsimonious tree relating all the forms.
More importantly, the ontogenetic transformation tree predicts the possible forms that can be obtained in evolution (phylogeny), most likely by going up the sequence of successive bifurcations .. This is why phylogeny appears to recapitulate ontogeny (Gould, 1977), though actually it does not; ontogeny and phylogeny are simply related through the dynamics of the generic processes generating form.” – Mae-Wan Ho
The point here is that the observed phenotype is the result of a highly structured developmental process and minor evolutionary novelties are going to arise as end products of this process.
The evolution of phenotype is therefore going to reflect the evolutionary possibilities of the developmental tree. Evolutionary changes are not ‘random’, but result from changes arising from the developmental process itself and are subject to the emergent ‘laws’ of such a process.
Phylogeny and ontogeny
“Phylogeny is the representation of the evolutionary history and relationships between groups of organisms. The results are represented in a phylogenetic tree that provides a visual output of relationships based on shared or divergent physical and genetic characteristics.”
“Ontogeny is the origination and development of an organism usually from the time of fertilisation of the egg to adult.” – Wikipedia
“Ontogeny refers to the development of an organism while phylogeny refers to how the organisms have evolved.”
The idea that “ontogeny recapitulates phylogeny” comes from Ernst Haeckel and supposes that the developmental stages of the embryo somehow follow the adult stages of an organism’s evolutionary ancestors.
This is a clear inversion of causality, as it is now obvious that evolutionary outcomes are the result of developmental processes and therefore must come after those processes and not before.
The segmentation tree, though, shows why the two processes, phylogeny and ontogeny, are so similar: because the one arises from the other.
A proposed model
The diagram shows a proposed model for the evolution of phenotype by a repeated alteration of the developmental goals of the organism.
Each modification to a teleological aim results in a phenotypic novelty which is tested against the environment for ‘fitness’ and a new adjustment is suggested to the next generation by the inheritance of such goals.
The children then inherit a new developmental aim which is then executed as well as can be done with the current toolkit, with new strategies being developed as required and again passed on to the next generation.
All processes involved consist of closed loop feedback systems and are fully ‘cognitive’ with the ability to absorb, interpret and assimilate information of the relevant nature and to act upon such information so as to make intelligent decisions as to the setting of a goal for some other sub-system.
The system as a whole is organised as a hierarchy of largely autonomous modules which communicate via goal-setting, with the upper echelons setting the aims for the lower, more functional processes.
Evolutionary change therefore proceeds in a top-down fashion from the environment to the organism via the various cognitive systems in operation at the time. The idea that population stress drives evolutionary processes is now entirely appropriate as this is what is in fact happening.
This particular way of structuring a complex system is probably as good a definition of ‘life’ as will be found anywhere.
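As a rough illustration of this organisational principle, the sketch below models a hierarchy in which an upper module translates an environmental demand into goals for its subordinate closed-loop modules. Everything in it (module names, numbers, update rule) is an illustrative assumption; it is meant only to show goal-setting flowing down and feedback flowing up, not any specific biological mechanism.

```python
# A minimal sketch of the goal-setting hierarchy described above: each module is a
# closed-loop controller that nudges its own state towards a goal, and an upper
# module sets the goals of the modules below it. All names and numbers are
# illustrative assumptions, not taken from the text.

class Module:
    """A closed-loop sub-system: it holds a goal and corrects its own state towards it."""
    def __init__(self, name, state=0.0, gain=0.5):
        self.name = name
        self.state = state
        self.goal = state
        self.gain = gain          # how aggressively the error is corrected

    def set_goal(self, goal):
        self.goal = goal          # goals arrive from the level above

    def step(self):
        error = self.goal - self.state
        self.state += self.gain * error   # feedback: correction proportional to error
        return self.state


class Supervisor:
    """An upper-echelon module: it interprets an environmental demand and
    translates it into goals for its subordinate modules."""
    def __init__(self, subordinates):
        self.subordinates = subordinates

    def step(self, environmental_demand):
        # top-down: the demand is delegated as a goal to each lower module
        for m in self.subordinates:
            m.set_goal(environmental_demand)
            m.step()
        # bottom-up feedback: the supervisor reads back the aggregate result
        return sum(m.state for m in self.subordinates) / len(self.subordinates)


if __name__ == "__main__":
    limbs = [Module("neck"), Module("forelegs")]
    organism = Supervisor(limbs)
    for generation in range(5):
        achieved = organism.step(environmental_demand=1.0)
        print(f"iteration {generation}: achieved {achieved:.2f}")
```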
How the giraffe got its neck
Short-necked giraffes were eating leaves from the lower branches of the trees but still yearning for the sweeter leaves higher up. They ‘know’ what to do: they form an intention to stretch their necks upwards, a teleological aim in accordance with the reality of their physique and their proprioceptive system, and they make the requisite movement.
All this is planned and executed by a high level control system complete with feedback and real-time adjustment. To describe this in terms of the movement of molecules is clearly a waste of time; we have teleology; we have cognition.
The stretching movement comes with a reward and this behavioural pattern is passed on to the next generation as ‘innate’ behaviour; an ‘instinct’.
In times of drought, the giraffe will still yearn for higher leaves and a longer neck. This yearning, together with the experience of urgency resulting from stress and the urge to survive, is sufficient for an intention for a longer neck to be formed and passed on to the next generation.
A new phenotypic target has been set and will be inherited by the children. This target comes easily from the giraffe’s inherent knowledge of its own physical shape and the possibilities of stretching. It has nothing to do with the developmental process and nothing to do with transcription of DNA or the manufacture of proteins.
The new goal is adopted by the developmental processes of the child and it is these procedures that are now responsible for achieving the required end-point. This is after all what the developmental system is good at.
A modified giraffe is born and the new phenotype is assessed in the field for ‘fitness’; new information is acquired and again passed on to the next generation in a continuous feedback loop that will in due course either stabilise to the environmental conditions or revert to ‘breed average’.
If adaptation is successful then other giraffes will soon notice what is going on and will simply copy the new phenotype. If we agree that all these processes are fully cognitive then these assertions are no longer outrageous or even unusual but natural corollaries of the main thesis.
Similar ideas were explained by Mae-Wan Ho in her paper but without explicit use of the idea that specifically cognitive processes are at play.
Here is her diagram showing feedback of information to and from the environment. Such information is interpreted via the metabolic and epigenetic nets to be codified into DNA as the inherited substance and passed on to the next generation.
From Mae-Wan Ho
To include DNA in the evolutionary process is an unnecessary distraction (The DNA delusion) but even so, the idea of feedback and top-down information flow is seen as a necessary requirement to explain the observations.
Extracts from the paper:
Almost 35 years ago, Ho and Saunders (1979) proposed the then outrageous idea that the intrinsic dynamics of developmental processes are the source of non-random variations that directs evolutionary change in the face of new environmental challenges; and the resulting evolutionary novelties are reinforced in successive generations through epigenetic mechanisms, independently of natural selection.
The non-random variations are teleological in nature and arise from intelligent modifications of the developmental ‘goals’. No natural selection is involved.
There is no need to change the developmental process as such; the changes occur naturally as the teleological aims change. This simplifies the process greatly by decoupling the evolutionary aims from the details of embryonic development.
We showed that the same (non-random) developmental changes are repeatedly produced by specific environmental stimuli.
Because the organism makes the same intelligent decisions each time. Moreover, each organism makes the same decisions, thereby allowing a whole population to evolve at the same time.
But random mutations—changes in the DNA—that generate hopeful monsters must be hopelessly rare, and to make things worse, major taxonomic groups tend to appear suddenly in clusters, as “adaptive radiations” (Gould & Eldredge, 1972),
Of course they do!
Furthermore, evolution does seem to proceed top-down, from phyla to subphyla, classes, orders, and so on (Valentine, 2004), rather than the converse, as predicted by Darwinian and neo-Darwinian natural selection of small random mutations.
A significant evolutionary novelty leads to a new species which then is improved by successive refinements which eventually stabilise into discrete groupings to form sub-species.
Darwinism wants a breadth-first development by random mutations followed by specialisation by selection. This predicts a completely different progression in the fossil record from the process described above, where major changes happen first, followed by smaller adaptations to local conditions.
.. & crucially, all the evidence indicates that macroevolution is decoupled from molecular or microevolution.
Yes, high level developmental processes are coupled to survivability within the environment whereas molecular processes are driven by the laws of physics. The two processes achieve independence via the construction of closed-loop feedback systems.
There is still no recognition that the patterns themselves and the biological forms need to be explained in their own right, independently of whether natural selection operates or not, and independently of the action of specific genes
Yes, natural selection, whether it operates or not, does not generate new forms.
The first distinctive feature of our epigenetic theory of evolution (Ho & Saunders, 1979, 1982, 1984) is that neo- Darwinian natural selection plays little or no role, based on evidence suggesting, on the one hand, that most genetic changes are irrelevant to the evolution of organisms, and on the other, that a relative lack of natural selection may be the prerequisite for major evolutionary change.
Inheritance is not via ‘genes’, not via DNA.
It is both difficult and risky to become a new species when there is constant pressure to compete for food and outrun predators; best leave this until things have calmed down a bit.
There is no separation between development and evolution.
The scheme presented here actually postulates two separate, self-contained systems that are connected to each other by means of the intended phenotype. The evolutionary system presents a phenotypic change and this becomes the new aim for the developmental system.
The output of one system becomes the goal for another. This is a standard cybernetic principle and with both processes comprised of feedback systems, the whole system remains highly stable to perturbation.
Reversion to breed average
Reversion to breed average is a phenomenon known to breeders of pigeons, dogs and cattle whereby certain features can be attained by selective breeding but will only last a couple of generations before reverting to the breed average.
Darwin accepted in chapter 1 of On the Origin of Species that: “our varieties certainly do occasionally revert in some of their characters to ancestral forms.”
This is inconsistent with Darwinian evolution, which is assumed to arise from a process that is directionless, without purpose and lacking in either foresight or memory.
The phenomenon is not inconsistent, however, with the idea of evolution-by-cognition. Various interpretations can be made, with just one possibility being that any modification to phenotype comes with an automatic expiry date. The new adaptation is given a trial period of three generations, say, for assessment and if, after that time, no advantage is perceived then the reversion occurs.
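A toy sketch of this ‘expiry date’ interpretation is given below. The three-generation trial period and the notion of a registered ‘advantage’ are placeholders taken from the speculation above, not measured quantities.

```python
# A toy illustration of the 'expiry date' interpretation: a phenotypic modification
# is kept on trial for a fixed number of generations and reverts to the breed
# average unless an advantage has been registered in the meantime.

BREED_AVERAGE = 10.0
TRIAL_GENERATIONS = 3

def next_generation(trait, generations_on_trial, advantage_seen):
    """Return the trait value and trial counter for the next generation."""
    if advantage_seen:
        return trait, 0                      # adaptation confirmed: keep it, reset the trial
    if generations_on_trial + 1 >= TRIAL_GENERATIONS:
        return BREED_AVERAGE, 0              # trial expired with no benefit: revert
    return trait, generations_on_trial + 1   # still on trial

trait, counter = 12.5, 0                      # a selectively bred deviation from the average
for gen in range(5):
    trait, counter = next_generation(trait, counter, advantage_seen=False)
    print(f"generation {gen + 1}: trait = {trait}")
```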
Another idea is that herd animals in particular will tend to copy the herd phenotype under the assumption that it is probably close to optimal and that ‘standing out’ from the rest of the animals is not a particularly good idea.
Arguments such as this are common amongst Darwinian theorists. Even though there is no direction or intent in Darwinian theory, the temptation to use such language is just too much for them. This says something about the nature of the phenomenon they are trying to describe, which is that it is obviously teleological in nature and they are in simple denial of this fact.
Here, however, we are quite at liberty to describe such processes as intentional and directed because this is the whole premise of the theory!
Transgenerational epigenetic inheritance
“Transgenerational epigenetic inheritance is the proposed transmission of epigenetic markers and modifications from one generation to multiple subsequent generations without altering the primary structure of DNA” – Wikipedia
So a form of inheritance that is independent of meaningful changes in DNA is possible and can persist over several generations.
The article from Wikipedia is full of statements supportive of the ideas presented in this page:
Feedback systems are involved at the molecular level
Information from the environment results in meaningful changes to the organism
Feedback loops persist across several generations (are inherited)
This has nothing to do with DNA
Epigenetic inheritance may only affect fitness if it predictably alters a trait under selection. Evidence has been forwarded that environmental stimuli are important agents in the alteration of epigenetic processes.
Positive and negative feedback loops are commonly observed in molecular mechanisms and regulation of homeostatic processes. There is evidence that feedback loops interact to maintain epigenetic modifications within one generation, as well as contributing to TEI in various organisms, and these feedback loops can showcase putative adaptations to environmental perturbances.
The feedback loops seen across multiple generations because of TEI showcases a spatio-temporal dynamic that is associated with TEI alone.
This is describing a cognitive feedback system but without using the word ‘cognitive’.
Examination of the specific reactions between molecules or even the feedback loops controlling them is not particularly useful, as the important factor is the overall organisation of such processes and the nature of such organisation. The activity of individual molecules is always subservient to the over-arching teleological aims of such a system.
Lamarckism
Lamarckism, also known as Lamarckian inheritance or neo-Lamarckism, is the notion that an organism can pass on to its offspring physical characteristics that the parent organism acquired through use or disuse during its lifetime.
Lamarck argued, as part of his theory of heredity, that a blacksmith’s sons inherit the strong muscles he acquires from his work.
What Lamarck claimed was the inheritance of characteristics acquired through effort, or will. – Wikipedia
This is interesting: What is meant by ‘use or disuse’ and why should this result in the inheritance of characteristics? Is there a record kept somewhere of the degree of usage of each characteristic?
One interpretation is that ‘usage’ consists of a collection of biochemical reactions that result in a permanent change to the muscle mass, say, and that this is then passed on to the next generation.
This doesn’t really help though, as we still need some way of encoding these changes to trillions of cells in order to pass them on and integrate them into the developmental process so that a modified adult can benefit from them. The encoding of volumes of complex information into a compact and meaningful format is otherwise known as ‘cognition’, so similar processes are at work here also.
A better way of describing ‘usage’ is to recognise that the physical action of wielding a hammer is preceded by an intention to use it and accompanied by an acquired cognitive pattern of movement, energy flow and recovery, involving preparation, action and feedback. It is proposed that it is precisely this ‘pattern’ that is passed on to the next generation. The response of individual muscle cells is irrelevant; all that matters is the workings of the cybernetic control system already programmed by the parent blacksmith.
“What Lamarck claimed was the inheritance of characteristics acquired through effort, or will”
This is more like it! The characteristics have been acquired through ‘effort and will’ and are inherited as an encoded summary of ‘proprioceptive activity’.
Evolutionary developmental biology
Wikipedia comes out as Lamarckist:
“Evolutionary innovation may sometimes begin in Lamarkian style with epigenetic alterations of gene regulation or phenotype generation, subsequently consolidated by changes at the gene level.” – Wikipedia
So evolution is via some form of Lamarckism .. but genes are involved somehow!
The gene is involved by ‘consolidation’; but what does this mean? The development did not need genetic alteration to start with so why does it need it for subsequent generations? The alteration of genetic information is described here as:
Non-random
Subsequent to phenotypic change
As a consequence of phenotypic change, not the cause
As the end point of cellular activity, not the start point
In what sense can genetic changes of this type be considered ‘causal’? In what sense is it ‘consolidation’? This is deceptive language, an attempt to give prominence and salience to changes in genetic structure where no causal relationship has been established.
August Weismann
Weismann conducted the experiment of removing the tails of 68 white mice, repeatedly over 5 generations, and reporting that no mice were born in consequence without a tail or even with a shorter tail. He stated that “901 young were produced by five generations of artificially mutilated parents, and yet there was not a single example of a rudimentary tail or of any other abnormality in this organ.” – Wikipedia
What was he expecting? The slicing off of the tails by Weismann bypasses the cognitive system of the mice. They did not want this to happen and did not see any evolutionary advantage in it, so there is no reason for lack-of-tail to be inherited. The tails were just cut off and this gives no clue as to how such a change might be integrated into the developmental processes of the offspring.
Weismann should have checked to see if the mice had acquired a fear of knives – or of Weismann himself!
Strength is built by cognitive processes
In “From mental power to muscle power – gaining strength by using the mind”, Ranganathan et al. found that:
Participants increased muscle mass and strength by simply imagining that they were performing exercise. Similar effects were obtained from runners and weight-lifters.
Yes, muscular development, usage, maintenance and repair are all cognitive processes as opposed to simple chemical reactions. The details of a billion molecular interactions cannot be inherited, but the finely tuned parameters of a closed-loop repair system can.
The Cambrian explosion
If there is no direct feedback from the environment into the developmental processes of animals then we would expect evolutionary change to be slow and effectively random with respect to the needs of survival, with any meaningful structure arising only from natural selection.
If an interpretive cognitive system could somehow develop and manage to connect to embryonic development then we should expect a sudden increase in the speed, diversity and appropriateness of evolutionary development.
This is what we see about 540 million years ago with the Cambrian explosion.
Several other such ‘explosions’ exist and other factors provide obvious explanations, for example the development of eyes, nervous systems, locomotion and large brains. All these require the phenomenon described here as ‘cognition’.
Describing evolution as being driven by the ‘motor of ecological change’ is as explicit an admission of top-down causation as you will find.
“We’re inventing nervous systems, we’re inventing eyes and other sensory systems.”
“So you have a whole cascade of feedbacks and it’s really just a matter of trying to pinpoint the trigger versus all the other consequences that flow from that point”
“(An increase in oxygen) is not enough, you also need a mechanism to drive the system into a runaway situation. You need the system to work very fast .. and one of the ways is ecological feedback“
“A good part of what is happening in the Cambrian explosion must be the motor of ecological change.“
When did cognition develop?
Watch the neutrophil below chasing a bacterium.
This single cell has no brain, eyes or musculature, not even a nervous system, but still seems to know exactly what it is doing.
It is aware of its own environment, has a sense of proprioception and engages in purposeful activity. It is sensing ‘food’, making decisions as to how to go about getting it, forming an intention to move and then chasing its quarry whilst responding at lightning speed to its efforts to escape.
All this activity implies feedback, awareness of self, formation of purpose and teleological causation; in short: ‘cognition’.
Evolution from bacteria to human should be relatively fast compared to what preceded it.
Intelligent design vs. atheist Darwinism
The intelligent design lobby are consumed by the idea that complex organisms cannot arise from random changes and require an intelligent process and an informational template.
Neo-Darwinists seem likewise driven by a need, outside of any scientific consideration, to eliminate the requirement for a divine being.
The above hypothesis has the intelligent design reside within the organism itself rather than in the mind of an external creator. The design arises from a sort of bootstrapping process within the cognitive system itself, that is to say, within the organism itself.
Both atheists and ID advocates are therefore happy with this scenario.
The complex structure of phenotype that we see is only ‘apparent’ and not ‘fundamental’. What is fundamental is the cognitive structure that mediates between phenotypic requirements arising from interaction with the environment and the physical requirements of the developmental process.
The true evolution is therefore the evolution of cognition and the evolution of development. An intelligent feedback system sits between The Laws of Physics on the one hand and Environmental Reality on the other and tries to mediate between the two in an effort to survive and procreate.
Structure and complexity arise, not just from the input of information from the environment but also from the need to reconcile this with the basic laws of physics and the emergent biological laws that arise from self-organisation of cellular systems.
This ‘intelligence’, then, ‘knows’ all about development, has a memory, and is able to formulate plans, make decisions and recover from all sorts of perturbations and upsets as it pursues its teleological goals.
Cognition as a ‘connection space’
The schematic below is intended to show how information from the environment is able to affect events at the molecular level.
A system of top-down causation (here left to right) proceeds from macro-level reality via an interpretive, cognitive bio-field to direct events at the cellular level. The self-organising properties of the cells, so called ’emergent’ properties, are utilised to then regulate the actions of organelles, molecules and even individual atoms.
The inclusion of the emergent properties here is important. The cognitive system knows nothing about the nature of the atom and only interacts with the layer ‘above’ by means of biological laws which arise as a consequence of the self-organising properties of large groups of cells.
It is the cognitive layer that acts as an informational bridge, connecting the macro world to the micro in a meaningful way.
It is the cognitive layer that evolves, that persists across generations and adapts according to environmental conditions. This must be the case as the laws of physics are fixed and self-organisation happens via emergence, i.e. as an inevitable consequence of the lower laws; there is no room at this level for innovation.
‘Causation’ is different from ’emergence’ and is the result of signals from the cognitive layer acting upon the emergent layer; causation is proceeding from top to bottom. The cognitive system accepts feedback from the emergent layer itself and becomes a learning system, with the lessons learnt being passed on to the next generation.
The laws of physics are immutable whilst the laws of biology are common to all living cells and so neither need an inheritance mechanism. The only structure that needs passing down the evolutionary line is therefore the cognitive bio-field that will interpret information from the environment and use it to organise the development of a new organism. In computing terms this is an adapter class, mediating between the two separate worlds of external reality and internal development.
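To make the adapter analogy concrete, here is a minimal sketch in code. The interfaces (an environment reporting conditions, a developmental program accepting targets, and a cognitive adapter translating one into the other) are invented purely for illustration; the point is that only the adapter layer would need to be inherited and to evolve.

```python
# The adapter analogy in code form: a 'cognitive layer' class that speaks the
# vocabulary of the environment on one side and the vocabulary of development on
# the other, translating between the two. All interfaces here are hypothetical.

class Environment:
    """The external world: reports conditions in its own terms."""
    def current_conditions(self):
        return {"food_height_m": 4.5, "drought": True}

class DevelopmentalProgram:
    """The internal world: accepts goals expressed in developmental terms."""
    def set_target(self, trait, value):
        print(f"development target: {trait} -> {value}")

class CognitiveAdapter:
    """Mediates between the two worlds: interprets environmental information and
    re-expresses it as developmental goals (the only part that needs inheriting)."""
    def __init__(self, environment, development):
        self.environment = environment
        self.development = development

    def update_goals(self):
        conditions = self.environment.current_conditions()
        if conditions["drought"]:
            # interpretation step: an environmental fact becomes a developmental aim
            self.development.set_target("neck_length_m", conditions["food_height_m"])

adapter = CognitiveAdapter(Environment(), DevelopmentalProgram())
adapter.update_goals()
```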
We are now entirely justified in saying that evolution is “driven by the motor of ecological change“.
Stability in biological systems
Stability in biological systems is maintained by means of engineering-style control systems at all levels of scale, from the molecular to the ecological.
An end goal exists for each sub-system, which continuously monitors its own performance using feedback mechanisms and performs corrective action as necessary to keep itself on track.
The schematic below is a nice illustration from the world of project management. A control system is shown taking input, delegating work to a subsystem and producing output.
Both the main system and the subsystem stabilise themselves via feedback and the main system accepts feedback from the subsystem. The two systems thus linked form a larger system which is itself a control system and is stabilised by its own internal feedback between the subsystems.
Praxis framework: cybernetic control
“While I think we can be certain that multi-level causation with feedbacks between all the levels is an important feature of biological organisms, the tools we have to deal with such causation need further development. The question is not whether downward causation of the kind discussed in this article exists, it is rather how best to incorporate it into biological theory and experimentation, and what kind of mathematics needs to be developed for this work.” – Denis Noble
The mechanism
Previous theories of evolution have foundered for want of a suitable mechanism.
We need:
Some way of storing vast amounts of information
A means of moving such information around
A way of integrating new information
Some mechanism for translating such information into molecular action
A mechanism that is stable and robust to physical, chemical and electrical influences
Existing physics will suffice; no new fields or forces are required
Now, since all forces at the level of biology are electromagnetic in nature and the only way to move charged particles around is with an electromagnetic field, we can only consider such a field as a realistic candidate for a cognitive bio-field.
A toroidal ring vortex fits the bill. Watch these things flow through water to see how stable they can be and how energy is managed and conserved. A water vortex loses energy owing to friction but an electromagnetic vortex has no such losses and will hence keep going forever.
Ring vortices can be seen to survive splitting into two (reproduction) and can merge together seamlessly (reproduction!). They can change proportions to pass through a narrow conduit and will survive minor perturbations in shape by simply re-stabilising to the original form.
Konstantin Meyl has proposed that such rings can absorb energy via transfer from the environment (Gibbs energy) and even from solar neutrinos. We therefore have an energy structure that is arguably as stable as a molecule whilst at the same time more flexible and more amenable to the storage of information.
How information is stored on these rings is not exactly clear but it is easy to imagine several of these merging together with the result that their individual informational content is also merged by a natural physical process.
Such a merging could be used to integrate impressions within the brain and also to merge together information from male and female gametes. Darwin’s idea of gemmules now seems more reasonable; information from all over the body is packed into ring vortices which flow to the reproductive organs and are merged into a single wave complex for passing on to the next generation. (See: Telegony)
The image that springs to mind is as drawn by Nicolaas Hartsoeker (right), except that instead of a miniature homunculus we now have a complete representation of a bio-field template for a human being. The ‘shape’ is included somehow and the developmental process will work out how to attain the final form.
Action of the bio-field upon emergent phenomena
In the video, a number of heart muscle cells beat in coordination. Each cell is capable of beating individually but here electrical signalling has enabled the cells to coordinate at a distance and a beating ‘muscle’ has formed.
This is an ’emergent’ property as far as we know. There is no central command structure controlling or coordinating the rate or synchrony of contraction.
If the muscle were to be shaped into a tube and twisted into a spiral then we would no doubt see a wave of propagation flow around the heart in the familiar pattern. However, this still isn’t a heart; what is needed is an extra cognitive layer on top to assimilate signals from the rest of the body and respond by triggering muscle contractions at the appropriate rate.
‘Cognition’ here does not interact directly with the physical stuff of the body but instead interfaces with the emergent features of biology via their local electric fields.
The energetic work of contracting heart muscles is implemented by the cells themselves and the coordination is via signalling between the individual cells but the overall pace setting is fine tuned by a somatic intelligence which knows nothing about the workings of a cell but everything it needs to know about the top-down requirements of the cardiovascular system.
A general principle
A general principle can be hypothesised whereby the high level cognitive aspects of biological systems have their effect, not by direct action upon the physical matter but by exerting subtle influences upon the emergent features by means of local bio-field interaction.
This will apply to beating hearts, vascular dilation, movement and proprioception, embryonic development, morphogenesis and even epigenetic feedback loops.
The inherited characteristics will consist of just this: a complete instruction set of all the high level knowledge needed to produce a new organism. In particular, phenotypic blueprints do not consist in any way of a physical shape to be attained but rather a complete set of cognitive instructions on how to achieve final bio-field stability by manipulation of the same features that will emerge from the development process itself.
Once this principle is understood the whole process above sounds much more plausible and the whole of biology simpler to comprehend. Formulations of Life as almost synonymous with ‘complexity’ now seem naïve. Instead, try to regard Life as having a very specific structure which actually reduces complexity to a bare minimum.
The face of a frog
This short video from Tufts University shows an electromagnetic field emerging as if out of nowhere. The field assumes the shape of the face of a frog, where brain, eyes, mouth etc. are visible before the physical organisation of the cells has even begun.
One way to interpret this is that it is the field that is ‘causal’, it is the field that contains the relevant information for the organisation of the physical matter, that is to say, it is a morphogenetic field.
In accordance with the general principle above, the cells simply divide for a while, at first simply reproducing and accumulating energy, but soon self-organising to create an emergent bio-field which naturally acts as an antenna for the morphogenetic field. See: Bio-field emergence
This field has been inherited from the parents and once it connects to the developing cellular mass will proceed to organise, via the emergent field, the development of the cellular collective into a complete frog shape.
The evolutionary origins of sexual reproduction
If a primitive organism, an amoeba for example, were to somehow absorb some bio-field information from another by transference of physical material, then our amoeba will be ‘enhanced’ somewhat; it has acquired some new cognitive capabilities. It passes on this information when it divides and the new abilities persist down the generations and are integrated into the ecology; the amoeba has ‘evolved’ as a species.
The propensity to engage in this sort of activity proliferates and very soon sexual reproduction is de rigueur in the amoeba community. Evolution now happens very quickly.
Once organisms have developed sufficient cognitive ability to recognise propitious adaptations in other individuals then they will actively try to acquire those abilities, by either mimicry or the sharing of bio-field fragments. Certain individuals now appear more attractive.
As complexity of phenotype increases then so does the complexity of the reproductive process which nevertheless retains a high salience within the cognitive system. The sharing of bio-field material may appear more ungainly in animals whose phenotype has evolved for other purposes, but has also become more efficient, with the packing of all relevant information into specialised gametes for the specific purpose of transfer between individuals.
Parthenogenesis in humans
It appears that the fertilisation of the egg by a sperm is not as necessary as one might think for human reproduction to take place:
“Although reproduction in most mammals occurs through mating between male and female, it has been hypothesized that presence of rare cases of parthenogenesis in humans that result in normal and viable individuals go unnoticed due to the absence of congenital anomalies”
“Parthenogenesis is not as rare as previously thought but can go unnoticed with an ovarian teratoma outcome or even a full-term birth, particularly, in the presence of a male partner.” – Hegazy et al
The peacock’s tail
An AI engine, presumably neo-Darwinist, gives a summary: “The peacock’s tail evolved through a process of sexual selection, where peahens chose mates with the most impressive tails. “
Neo-Darwinist evolution is supposed to be by random fluctuation but again the preferred explanation is one of evolution-by-cognition.
What is meant by ‘most impressive tails‘? Why are some tails more impressive than others and how did the idea of ‘preference’ in this context arise from random fluctuations in the first place? Both concepts require the idea of ‘cognition’ as a prerequisite.
If we accept the idea that cognition and preference contribute to phenotypic change then we can think that in addition to the females preferring exhibitionist males, the males themselves will soon catch on to the idea and start to produce more and more outlandish plumage as a result.
The urge to reproduce is strong and males, being males, will only stop when they get eaten by predators or fall over owing to the weight of their own tails.
This development now has nothing to do with randomness but can legitimately be said to be driven by a combination of cultural norms and the more fundamental need to reproduce.
Developmental plasticity and the origin of tetrapods
Scientists raised bichir, a type of fish with lungs, exclusively on land to see what would happen:
The researchers discovered the bichir raised on land were dramatically different than those raised in water. The land-raised fish lifted their heads higher, held their fins closer to their bodies, took faster steps and undulated their tails less frequently and had fins that slipped less often than bichir raised in water. These land-based fish also underwent changes in their skeletons and musculature that likely paved the way for their changes in behavior. All in all, these alterations helped bichir move more effectively on land.
These findings reveal the bichir is more plastic — that is, malleable — during its development than previously thought. This plasticity is what made this fish capable of growing up very differently depending on its environment. – LiveScience
So functional behavioural and structural changes have been implemented by the fish themselves in response to environmental challenges.
The adaptations were certainly not random though; they were specifically directed towards a certain goal. This now paves the way for a selective process to occur.
The problem with neo-Darwinian selection is that development must precede selection, i.e. a feature must arise somehow before it can be tested for ‘fitness’ and the only solution presented by the neo-Darwinists is via random mutations. With these fish, however, we see that a certain degree of adaptation takes place as a direct response to environmental challenges and is immediately tested for suitability in the real world.
If, now, some fish do not survive to reproduce but others do, we may say that some sort of selection has taken place.
Unfortunately, these researchers did not have enough time to see if these changes were inherited.
Summary
The process of evolution presented here is one of a very specific relationship between the developmental and evolutionary processes. The central hypothesis is that evolutionary cognition is responsible for interpreting environmental information and setting a new teleological goal for the developmental processes.
Many authors have described similar schemes but have been missing a few key ingredients:
Recognition of the feedback system as fully ‘cognitive’
An electromagnetic field as the substrate for that cognition
The specific relationship between developmental and evolutionary processes
Rejection of DNA as the mechanism of inheritance
What we would like to see is that an organism such as a fish, is able to set a new phenotypic goal in response to environmental pressure and produce offspring that in some way have evolved towards that goal.
We don’t quite have this, unfortunately, but we do have evidence for every part of the process separately:
The obvious existence of feedback systems at all physical scales
The inheritance of acquired characteristics is demonstrated
An induced fear response is inheritable
Goal oriented behavioural patterns are inheritable
Phenotypic changes (increased muscle mass) can be induced at will
Phenotypic changes induced by selective breeding can persist across generations
Memory can be inherited and is independent of physical order
Phenotypic changes arise from modifications to the developmental process
Induced changes to the developmental plan can be passed to the next generation
The fossil record seems to support the rapid emergence of new species
And finally: ‘All’ evolutionary theorists talk about evolution as if it were directed and as if adaptations arise as a necessary consequence of environmental pressure. This is true even of neo-Darwinists, who, whilst insisting that evolution has no direction at all, nevertheless cannot resist the temptation to talk about it as if it does!
It appears that there is no diffusion of gases through the lining of the lungs but that energy in the form of electromagnetic vortices is transferred from the oxygen gas in the air, directly into ADP molecules in the bloodstream.
The ADP molecules flow to the cells and this energy is used to facilitate cellular processes.
There is no transfer of oxygen gas from the air to the bloodstream.
Mainstream view
The accepted narrative is that oxygen gas is inhaled into the lungs whereupon some of it diffuses or otherwise passes through the lining of the lungs, through the capillary walls and into the bloodstream. This oxygen reacts with carbon to release the energy used by cellular processes and carbon dioxide is produced as a waste product. This CO2 then passes back through the lungs past the incoming oxygen and is expelled as we exhale.
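For reference, the textbook reaction behind this accepted narrative is the aerobic oxidation of glucose; it is quoted here only to make the mainstream account concrete before its problems are listed:

```latex
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O},
\qquad \Delta G^{\circ\prime} \approx -2870\ \mathrm{kJ\,mol^{-1}}
```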
Problems with this idea include:
No credible mechanism is described by means of which oxygen passes one way through the membranes. CO2 moves in the opposite direction and nitrogen is prevented from moving either way; but how?
Fish manage to breathe somehow despite having no access to gaseous oxygen. The assumption that gaseous oxygen and the dissolved version are pretty much identical is simply not justified.
Gaseous oxygen molecules are actually quite huge (see below) and if we are to believe that they do indeed pass through a biological membrane then we will need some actual evidence for that.
Techniques for measuring the proportion of oxygen and CO2 in exhaled air do not take into account the possibly altered state of the oxygen itself and in addition adhere to an outdated theory of gases.
A better explanation is available.
Meyl’s hypothesis
Professor Konstantin Meyl describes a gas as consisting of molecules where the electrons have come out of their n=1 orbital and formed a ring around the outside of the rest of the atom. The reduced field strength here has enabled them to expand to some 30,000 (!) times their original dimensions.
The diagram depicts a gaseous oxygen molecule comprising an O2 ‘nucleus’ surrounded by 8 electrons in a ring.
Each electron has its own electric field spin and this results in a magnetic dipole for each particle. The electrons stick together via the magnetic field and are kept apart by the electric field.
The electrons have their own local spin and the ring will in addition rotate as a whole. All this spinning constitutes ‘energy’ and the system is able to accumulate energy from the outside, store it and release it later as conditions permit.
This expanded molecule has its own magnetic dipole and will thus adopt a specific orientation with respect to other gas molecules in accordance with the laws of electromagnetism; gas has a structure.
This model provides a nice explanation for Avogadro’s Law and Meyl actually derives Avogadro’s constant from theory in the video; it had hitherto been thought of as a fundamental constant of the universe to be determined only experimentally.
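For comparison, the standard textbook statement of Avogadro’s Law and the ideal gas relation are given below; this is the conventional formulation that Meyl’s model is being set against, not a reproduction of his derivation (which is not given in the text):

```latex
\frac{V}{n} = \text{const.} \;\; (\text{fixed } T, p),
\qquad pV = nRT,
\qquad N = n\,N_A, \quad N_A \approx 6.022 \times 10^{23}\ \mathrm{mol^{-1}}
```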
Respiration
The vortex energy (field rotation) from the spinning electron ring is transferred to the rotational energy of a phosphate group of an ADP molecule in the blood. This ADP is carried away to the cells where it can release the energy to do something useful.
The oxygen in the lungs is now energetically depleted somewhat and is exhaled.
Constant breathing of the air will cause it to lose more energy still, making it ‘stale’ and unhealthy (What causes pneumonia?). Repletion from the atmosphere is needed.
“Proof is provided, for example, by top athletes who give off significantly more energy than they absorb through food. Similarly, migratory birds on a non-stop flight violate the law of conservation of energy. What they eat in addition to food obviously comes from the air.” – Meyl (Die-Covid-Falle)
Microwaves and 5G
Exposure to microwaves at around 60 GHz is conjectured to interfere with the transfer of energy, raising the possibility of whole flocks of birds falling from the sky and of the spontaneous collapse of Chinese citizens coinciding with the rollout of 5G in Wuhan.
“First of all, the advantages of the respiratory system for the rural dweller should be emphasized. It is insensitive to static and low-frequency interference. Even high frequencies up to 1 GHz have only a minor influence. However, high and maximum frequencies above 2.4 GHz are used in mobile communications.
“Extreme frequencies, such as microwaves and above, can disrupt or hinder the rotation of gas molecules. There are speculations about 60 GHz, at which spontaneous death can occur under certain circumstances. If at a certain maximum frequency the gas ring can no longer be absorbed by the phosphorus tail of the ADP, then we immediately no longer get any energy.
“In this way, in tests, entire flocks of birds have been taken out of the sky during flight. The deadly frequency was switched off again as quickly as possible. There was silence about this and the crime against nature was covered up. The telecom industry has left it to individual brave citizens to report on the killing of the animals in alternative media, in order to then denigrate them as crackpots and conspiracy theorists” – Die-Covid-Falle
ADP/ATP cycle
Mainstream opinion is that ADP is converted in the mitochondria to the higher energy molecule ATP by the addition of an extra phosphate group, and that the loss of this phosphate, with consequent conversion back to ADP, is a source of energy for cellular processes.
Meyl, however, is claiming that ADP and ATP act independently as vectors for vortex energy, with ATP managing to acquire extra rotational energy owing to the additional phosphate group. Mainstream has the energy stored in ‘bonds’ whereas Meyl has it in ‘rotational energy’.
There appears to be no need for ADP and ATP to be continually transforming from one to the other.
“In my opinion, I would like to conclude by saying: After the rotation and transport have been transferred, the mitochondria undergo refining and the rotation of the ADP is taken over by the ATP. The ATP molecule has a tail that is longer by one phosphorus.
“Now the transport continues to the muscle cells, the heart muscle and the thinking apparatus.
“The rotation is used as needed. That is why ATP and ADP with a lot and a little rotational energy can be detected in the blood everywhere.” – Die-Covid-Falle
This all sounds entirely reasonable and in tune with the laws of physics, so why have we believed for so long in the oxygen/CO2 cycle?
Exhaled air
Conventional wisdom says that exhaled air contains less oxygen than inhaled air, and by the same percentage as the increase in carbon dioxide. All sources seem to quote the same figures although finding a decent experiment that proves them has proved problematic.
The coincidence of proportions is not by itself actual proof of transfer across a membrane and we can certainly question the accuracy of these results.
Measuring CO2
The proportion of carbon dioxide in the air is commonly measured by the amount of infra-red absorption. This is no doubt fine if the only thing that has changed is the amount of CO2, but here we are measuring air that has been exhaled.
Such air may well contain less oxygen (not according to the above), may well contain more moisture and in any case contains oxygen that has been depleted of energy.
It doesn’t seem unreasonable that depleted air may well absorb more infra-red radiation simply because of that fact, that it is low on energy and in a more ‘receptive’ state.
Measuring oxygen levels in exhaled air is, if anything, more complicated than measuring CO2 as multiple factors such as temperature, humidity and pressure will affect the result.
The oxygen content is not measured directly but is calculated according to some formula that assumes the Theory of Ideal Gases, uses some empirically derived ‘constants’ and is relative to a ‘calibration’ value.
All fine except we are now saying that the oxygen itself can be in a markedly different state in exhaled air and, moreover, that the fundamental concept of gases is now highly deprecated. We are therefore justified in adopting a highly sceptical attitude towards existing techniques.
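As an illustration of the kind of ideal-gas bookkeeping being questioned here, the sketch below shows how an oxygen fraction is conventionally inferred from a partial-pressure reading; the specific numbers and the water-vapour correction are illustrative assumptions, not a description of any particular instrument.

```python
# A minimal sketch of a conventional ideal-gas calculation: the oxygen fraction of
# the dry portion of a sample is inferred from a measured O2 partial pressure, the
# total pressure and the saturated water-vapour pressure at body temperature.
# All figures are illustrative, not taken from a specific analyser.

def dry_oxygen_fraction(p_o2_mmHg, p_total_mmHg=760.0, p_h2o_mmHg=47.0):
    """Mole fraction of O2 in the dry gas, assuming ideal-gas behaviour
    (partial pressures add linearly and are proportional to mole numbers)."""
    return p_o2_mmHg / (p_total_mmHg - p_h2o_mmHg)

# e.g. a reading of ~116 mmHg O2 in warm, water-saturated exhaled air
print(f"{dry_oxygen_fraction(116.0):.1%}")   # about 16%, the commonly quoted figure
```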
Fish
Fish are clearly getting their energy direct from the water somehow whether it comes from the dissolved oxygen or not.
The architecture of the gills is markedly different from mammalian lungs and this reflects the difference in viscosity between water and air. Water will not circulate properly in lungs and will not empty properly upon attempted exhalation.
The continuous flow of water through the gill structure is an obvious reflection of this.
Summary
We now have:
A credible mechanism by which energy passes from the atmosphere to the bloodstream
For the first time in history, a theory of gases that adequately describes the familiar and observable phenomenon of ‘pressure’
A theory of gases that is consistent with Avogadro’s and other empirical laws
A derivation of Avogadro’s number, a supposedly ‘fundamental’ constant
A mechanism by which 5G and other microwave technologies can directly affect physiological processes
An alternative and believable explanation for the ADP/ATP energy transfer
References:
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
An argument is presented that the absence of a heartbeat does not necessarily imply an absence of circulation and that given appropriate energetic input, the heart may be persuaded to start beating again and a body may come back to life. Accounts of cosmic events support the hypothesis.
There is a great deal of scepticism surrounding the idea that a man, once pronounced dead, could come back to life at some time later. This is largely owing to the idea that the stopping of a heart is synonymous with cessation of circulation and that this leads inevitably to a rapid and irreversible state of death.
However, the heart is not the main driver of circulation; it is not a pressure pump (The Heart is not a Pump) and, according to many researchers, the main impetus for the blood flow comes from the capillary beds. See: The Heart and Circulation.
Circulation continues after cardiac arrest
“S A Thompson showed that in asphyxiated dogs, residual circulation continued for up to two hours after cardiac arrest. Other research has shown 20% to 40% increases in cardiac output after occlusion of thoracic aorta” – Branko Furst
“The blood in chick embryos is seen to be circulating before the heart has been formed and vortices are observed in the blood in between pulses.” – Furst
“When the heart (of a chick embryo) was stopped, blood continued to flow for approximately 50 minutes, albeit at a lower velocity. When IR was introduced, the post-mortem flow increased..” – Li, Pollack
Galvanic energy
Somewhere way back on my Twitter feed is a photocopy of a page in a book written by some Victorian scientist who took recently deceased bodies from a morgue and drained the blood from them. The blood was re-vitalised by the introduction of galvanic energy and then replaced into the body. The blood started to flow by itself, the heart would restart and the bodies would exhibit muscle movement.
Given what I now know about the blood and circulation, I am now prepared to believe that such a thing is possible. However, it isn’t the galvanic energy that causes the blood to flow but the infusion of blood, which creates the vortices necessary for propulsion. The flow of blood then causes the heart to start beating again.
Vortex flow hypothesis
Many researchers have come to the eventual conclusion that it is not the heart that is pumping the blood at all but that the ‘blood moves under its own momentum‘ (The Heart and Circulation) or some such phrase. This seems outlandish, but then so are the circumstances.
The phenomenon of vortices is repeatedly mentioned in connection with the flow of blood but the importance of this does not seem to have been appreciated.
The hypothesis here then is that the blood is moved by the generation of powerful energetic vortex structures which propel the fluid around the circulatory system and that the resulting flow is actually instrumental in causing the heart itself to beat.
The vortex structures have an electrical component and are able to absorb energy from the environment as a power source. This energy may include: electric field energy, heat, Gibbs energy and the energy from the solar neutrino stream.
The vortices are generated in the capillary beds but are starting to deteriorate by the time they get to the heart. It is here that they are re-organised by the spiral flow of the heart into the geometric structures conducive to efficient, almost friction-free flow.
In this paper some ATP and other biological substances were added to water and the solution placed into square channels of various dimensions.
The water was seen to self-organise first into a collection of vortices as shown and second into a self-sustaining directional flow along the tube.
This begs the question: “Where does the energy come from to sustain such a flow?”
One idea is that the ambient heat is translated to a vortex structure and in this way contributes to such a flow.
Another idea comes from Konstantin Meyl in his book “Scalar Waves: a first Tesla physics handbook”. Meyl suggests that the vortex structures are capable of absorbing the occasional passing neutrino and transducing them to vortex energy within the water.
The hypothesis then is that during crucifixion the heart stopped and life signs diminished considerably, but irreversible death had not yet occurred and there was still some circulation of blood driven by the vortices from the capillary beds.
Possibly movement of the body and the change from vertical to horizontal posture caused blood movement which created more vortices, which increased blood flow through the heart, perhaps causing a gentle pulsation.
A sudden addition to the energetic input from an increase in intensity of the solar neutrino stream further promotes blood flow and the heart begins to beat in earnest again. Life signs improve and normal functioning is resumed.
Eclipses and the solar neutrino stream
How could this come about? Konstantin Meyl is claiming that during an eclipse, the Earth acts as a lens and neutrino density on parts of the planet increases by a factor of 20,000! We should therefore be looking for evidence of a solar eclipse.
“Scalar waves..” – Konstantin Meyl
It turns out that there are plenty of references in the Bible itself that support this idea:
In Luke 23:44-45 of the New American Bible Revised Edition — the Bible translation heard at U.S. Catholic Masses — the verses explicitly refer to a solar eclipse preceding Jesus’ death: “It was now about noon and darkness came over the whole land until three in the afternoon because of an eclipse of the sun.”
Both the New Revised Standard Version Catholic Edition and the New Jerusalem Bible — EWTN founder Mother Angelica’s favorite teaching Bible — simply state “the sun’s light failed.”
The Contemporary English Version — a New Testament translation approved by the U.S. Conference of Catholic Bishops — declares, “The sun stopped shining.”
The Douay-Rheims version of the New Testament — published in 1582 and used for centuries by English-speaking Catholics — asserts “the sun was darkened.”
Red blood cells are usually separated from each other and freely flowing but under some circumstances will clump together in stacks resembling piles of coins. Various medical conditions are associated with this condition and exposure to some forms of electromagnetic radiation can lead to rapid formation.
What causes this? A decent explanation is nowhere to be found. Medical websites will claim an association with a medical condition such as diabetes and then pretend that this is a ’cause’, but this doesn’t explain, in terms of the laws of physics, why a bunch of negatively charged cells should stick together. Why this particular formation? What are the physical forces?
This post takes a look at the usual description of the zeta potential, shows that it is inadequate as an explanation and provides a more credible hypothesis involving the flow of electric field currents and magnetic dipoles.
Magda Havas here demonstrates how easy it is to create the rouleaux formation in the blood of a healthy individual by a short exposure to cell-phone radiation.
Zeta potential
A common claim is that each red blood cell (RBC) is covered with a coat of negative charge which is responsible for keeping the cells apart. Negative charges repel and this stops the RBC from getting too close together, let alone sticking to each other.
However, if this is the case then how do the cells ever stick together at all?
“Loss of zeta potential” is hardly an explanation. If even a small negative charge is left on the cells then they are never going to stick together, so we must have a cell losing all its charge completely; but since when did a loss of charge cause things to stick together?
Moreover, we are left with the questions of what happens to all that charge and where the new charge comes from when normality is restored.
Other models describe RBC surrounded by alternating layers of negative and positive charges. This sounds logical when you initially read the descriptions but in reality raises more questions than it answers.
How are these layers maintained?
Why don’t the positive and negative charges just attract each other, stick together and cancel each other out?
If an RBC is surrounded by a positive layer then why don’t other RBC with negative layers just stick to that layer?
Why do they stick together sometimes and not others?
What has this to do with mobile phone radiation?
Any description such as this in terms of ‘static’ laws is going to have problems explaining why the cells stick together in some circumstances but not others.
Why is the stacking so precise? This isn’t down to a weakening of anything but to some strong force that attracts the cells together and maintains an orderly alignment.
Purnell and Ramsey claim an electric circuit around the circumference of the red blood cell as shown. A circular current necessarily produces a North-South magnetic dipole, shown here by the pink arrow.
Purnell and Ramsey
This is getting somewhere, although still problematic. What is it that fuels this current and how does it work exactly? One idea is that the current is generated essentially by chemical means but what happens when the energy runs out? What happens to all the fuel waste and how does the RBC refuel?
Merab Beraia has the blood cell ‘re-charge’ every time it passes through the heart: Electromagnetism and blood. The heart generates a strong electric field and blood passing through it will have electric currents induced within it and will in addition acquire a high frequency acoustic vibration which further serves as an energy supply for the RBC on its travels.
This sounds likely but still doesn’t explain the rouleaux formation.
The common idea of electricity as a stream of moving electrons is highly questionable. See here: What is electricity?
Hypothesis (vortex physics)
The red blood cells do not have circles of moving electrons but instead host stable toroidal field movement in the shape of a ring vortex. This field movement necessitates a magnetic dipole field as shown above.
The magnetic dipole ultimately will arrange the RBC in the rouleaux formation of North to South poles all in a line. It is these magnetic attractive forces that cause the sticking together and the negative field forces that stop the cells actually making contact.
We have a common pattern of field forces that are long range attractive but short range repellent.
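The magnetic half of this hypothesis can at least be written in standard form. A circular current I enclosing an area A has a dipole moment m = I·A, and two such dipoles lined up head-to-tail along their common axis attract one another. The formulas below are ordinary magnetostatics; their application to red blood cells is the conjecture of this page, not established physiology.

```latex
m = I\,A = I\,\pi r^{2},
\qquad
U(d) = -\,\frac{\mu_0\, m_1 m_2}{2\pi\, d^{3}}
\quad (\text{coaxial, aligned dipoles separated by } d;\; U<0 \Rightarrow \text{attraction})
```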
The human bio-field
A global bio-field exists in the human body which consists of electromagnetic fields organised in a toroidal fashion and is measurable outside of the body. Energy in the form of tiny ring vortices circulates inwards in a spiral fashion towards the ‘chakras’ of the body and in particular, the heart and the vessels of the circulatory system.
Field energy moves through the walls of the arteries and into the bloodstream where it is available for absorption by the red blood cells.
The currents in the RBC are therefore reinforced by the injection of this energy. No chemical reactions are needed to sustain the circular currents.
The journey of the blood
Starting at the heart, then, any vestiges of rouleaux structures are thoroughly swept away by the violent ‘vortexing’ of the blood as it moves through the chambers of the heart.
Electrical energy is injected into the blood from the heart muscle and the whole lot comes out in an apparently ‘turbulent’ stream. This is not chaos however as all the time, the water and electric fields in the blood must obey the laws of physics and this means a highly structured state of vortex flow.
The RBC are separate and apparently free flowing but all the time will be absorbing energy from their environment, thereby giving the appearance of having their own energy supply. As they proceed towards the arterioles, more energy moves from the bio-field into the arterioles and this will spiral into the blood cells themselves and strengthen the toroidal flow therein.
A stronger electric circuit means stronger magnetic dipoles, and as the blood progresses these begin to organise the cells into more linear structures, much like a line of bar magnets. They don’t actually ‘stick’ together because of the negative electric component of the field, which always maintains a minimal spacing between them.
This is a requirement for capillary flow. The RBC will need to pass through the capillaries in single file and an orderly procession is assembled long before it is actually needed, nicely avoiding any ‘queuing’ and concomitant loss of valuable energy.
Vortex streams continue from outside to inside of the small blood vessels and appear as an ‘exclusion zone’ in microscopy, keeping nano-particles away from the walls of the vessel.
Charge layers revisited
The idea that the red blood cells are surrounded by alternating layers of charged particles is easy to imagine at the first attempt but falls apart under close scrutiny.
The particles must maintain their disposition by means of electric field forces and they must do this whilst the blood is flowing through the arteries and being pumped through the heart.
Whatever the arrangement of the particles, the forces between them must obey the laws of physics and wherever there is movement of charges through an electric field, there are spirals and vortices; these are unavoidable. The notion that nicely ordered charge layers could survive this needs considerable justification if we are to believe it.
So what causes rouleaux?
If the above argument is correct as to the physics and function of the zeta potential then it would seem that the rouleaux formation is a result, not of a depletion of energy or electric field but rather a surplus.
Bad energy management has somehow resulted in a red blood cell having too much energy. The electric field has increased in strength and the resulting magnetic dipole has become the dominant force, orienting and pulling the erythrocytes together into a configuration that is usually reserved for capillary flow, and even then what we see here is a highly exaggerated version of it.
The formation is maintained by the absorption of external field energy and so may persist for some time even on a microscope slide as the whole arrangement sequesters energy from the laboratory.
How the body copes with cell-phone emissions is not completely understood, but if the EMF arrives as field vortices, or is converted into them by the body, and makes its way to the blood vessels or heart, then we have an additional, unexpected input to the body, the blood flow and thence the red blood cells, with the result being the rouleaux strings that are observed.
How does grounding work?
Grounding is said to be very effective in restoring rouleaux blood to its usual state. The usual explanation is that the blood is clumping because of a lack of charge, meaning electrons, and that the process of grounding is a way of obtaining electrons from the Earth thereby resolving the deficit.
This formulation of grounding is highly problematic though, with just a few of the issues being how the electrons are supposed to infiltrate the body, how their destination is determined, and by what mechanism they are transported.
Electrons have no intrinsic energy or motive force of their own and so all the work must be done by the body to move a highly reactive particle through a highly charged landscape.
The formulation in terms of vortex energy gives a much better explanation. The process of grounding does not add anything extra to the body but allows surplus energy to drain away, which is precisely what is required.
Excess vortex energy makes its way to the Earth and the global vortex system is relieved of stress. In due course this impacts upon the blood at the centre of the system, the tight rouleaux formation is loosened and the individual blood cells can move about independently again.
Exosomes are commonly described as “nanoscale particles formed by a lipid bilayer that encapsulates and carries a biologically active cargo of proteins, lipids, RNA and DNA.“
It has been suggested that these particles are what mainstream science is mistakenly identifying as ‘virus’ particles. After all, they are described in a similar way, look the same, have similar structure and are purported to be the carriers of biological information via RNA strands.
They have been described as ‘cellular debris’, i.e. the result of cellular breakdown, but at the same time many think they have the function of preserving resources by transporting them to another cell. This is slightly contradictory. The phrase ‘cellular breakdown’ suggests an increase of disorder, but the preserving of resources sounds more like a transformation of order, a ‘managed decline’ where information and energy are not destroyed but merely redistributed somewhere else.
They are asserted by the mainstream to be a highly functional part of normal cellular activity, are capable of the ‘transfection’ that the virologists use to support the idea of a virus and have been proposed as the basis for a whole bunch of new gene therapies.
How is such a thing constructed? What is their function? What is their relationship to influenza and gene sequencing? What might be their relationship to vortex physics, solar filaments and energy conservation?
Hypothesis
The nucleus of an exosome is an energy vortex arising within the cell. Because of the ‘concentration principle’ the vortex continues to accumulate energy until it has sufficient energy to manifest some RNA ‘out of the nothing’ as claimed by Stefan Lanka.
The RNA continues to promote a vortex structure and a new ‘vortex radius’ (a sphere of high energy) is created. Energy accumulates at the radius and is used for the construction of the lipid bilayer, whereupon the exosome is complete. It can now be transported out of the cell and either into another cell or simply excreted, as observed.
Information transfer within biological systems is via electromagnetic scalar waves. The exosome arises from such a structure; it needs both energy and information for its own construction.
If too much energy should arise within the cell then that energy needs to be expelled; it needs to go somewhere else. Exosome production is a neat solution for this; the energy is packed up in a scalar wave hosted by the RNA and ejected, possibly to be reused in another cell. The construction and transport of an exosome necessarily results in a net loss of energy for the cell; there is no getting around this.
Exosomes are described as ‘biologically active’ but what makes a piece of RNA ‘active’? If the core of an exosome is now a vortex field then this field has built in energy, an intrinsic propensity to ‘move’ to have some effect in a biological system. The word ‘active’ is fully justified in this case.
It is the structure of this vortex, the information held within it, that causes the manufacture of the physical particle and so there arises the possibility that a similarly structured vortex could enter from the outside and that this would trigger the construction process. This is the basis of ‘transfection’ which seems to ‘spread’ the exosomes a bit but doesn’t seem to lead to any systemic disease.
The vortex principle of energy management
The basic laws of electromagnetism mandate that ‘field energy’ will rotate in a circular fashion and will have a tendency to form various helical, toroidal or vortex-like structures. Energy at a distance from a vortex will spiral inwards and a concentration will occur at the vortex radius.
Energy is always available at a vortex centre for various tasks. Classical biology has energy as essentially dissipative and prone to disorder but this makes it very difficult to even imagine what goes on in a cell from an organisational perspective.
The vortex principle starts with a constant flow of energy as a given, with accumulation at various points within the cell. This is the basic flow that the cell has available to it and ‘life’ can be considered as consisting of relatively minor modulations of this general structure.
Energy spirals in from ‘afar’ so there is always a sufficient quantity but if there is too much energy then the excess needs to be dissipated against the vortex flow. The physical movement of RNA away from the cell is but one way to do this and because of the continual inward flow, there is always enough energy around to accomplish this.
Try imagining this the other way around and you just can’t do it. If energy is generally scattered all over the place in a radial fashion and you run out of it then how are you going to refuel? You have no energy left to move anything around or somehow ‘place an order’ for more! You are left with an energy void, a thermodynamic heat death.
The inward spiralling vortex is a necessity from this point of view.
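Purely as a geometric illustration of what ‘spiralling inwards’ looks like (nothing here is a physical constant), the path can be pictured as a logarithmic spiral whose radius shrinks with each turn:

    import math

    # Geometric illustration only: r = r0 * exp(-k * theta), with an
    # arbitrary starting radius and decay rate.
    r0, k = 10.0, 0.15
    for quarter_turn in range(9):
        theta = quarter_turn * math.pi / 2   # every quarter of a turn
        r = r0 * math.exp(-k * theta)
        print(f"theta = {theta:5.2f} rad  ->  r = {r:5.2f}")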
What is influenza?
Influenza is the influence of seasonal variations in the Earth’s magnetic field. The field is twisted into helical structures which discharge from our atmosphere towards the ground. Small body hairs act as antennae and conduct the vortex into the body, where it flows around under its own energy along sympathetic conduits.
Flu vortices do not belong in the cells but end up there anyhow and commence to accumulate energy. Exosomes are the result and are excreted. The creation and expulsion of so much tissue necessitates a net energy loss resulting in fatigue and dysfunction at least.
Exosome RNA is sequenced and imagined to be viral RNA.
Exosomes and their RNA are therefore associated strongly with influenza but the disease is not transmissible from human to human and it is not the physical RNA that causes the initial problems but the electromagnetic vortex information. The RNA is a result of the energetic disturbance and not the cause.
Caveat: Things are probably not quite this simple and there are likely extra layers of complexity:
Field vortices are likely modified on entering the body either through the skin or body hairs
All input to the body is ‘interpreted’ to some degree and this allows for immunity effects
At some point with influenza a critical point is reached and a systemic reaction takes place which produces the familiar fever etc.
Transfection
The mechanism of transfection is now explained as an active tunnelling of the vortices from one cell to another. The same mechanism is not credible for the RNA itself, as RNA has no ‘agency’ of its own. It has no motive energy, cannot reproduce and so cannot be considered ‘active’ as described.
This paper raises the possibility that virus particles can be produced by the vibration of cells at certain frequencies which create hexagonal structures similar to the alleged shape of the virus. Reinterpretation: scalar waves produced as artefacts of the 5G antenna are responsible for the creation of exosomes.
Sunspots and covid
Revealing the relationship between solar activity and COVID-19 and forecasting of possible future viruses using multi-step autoregression (MSAR) – Nasirpour et al https://pmc.ncbi.nlm.nih.gov/articles/PMC7961325
“Regarding the results of this study, we found that sunspots are the main cause of virus generation in the world.
This research reveals that the biological and astrophysical mechanisms are related to the generation of world pandemics such as COVID-19.”
Anti-Aging by longitudinal magnetic waves: A new approach by modulating ATP-Levels – Schnabl, Schnabl, Meyl
Introduction of scalar waves into plants slows ageing via the promotion of ATP production. ‘Imprinting’ of the waves with different plant sources shows variable results indicating the transfer of ‘information’ from one to the other.
The input of electromagnetic energy has had a direct causal effect on cellular processes.
Fred Hoyle and Chandra Wickramasinghe examined the epidemiology of flu and decided that:
a) It was not contagious
b) It was somehow linked to meteorological events
c) It could be described as being caused by ‘virions’ from space
A virion according to the authors is some sort of particle that can enter the population and cause the production of flu ‘virus’ within the body. This resulting virus would lead to disease but was not transmissible person to person; only the original virion can cause disease and this comes from space.
The virion is now clearly a field vortex which explains:
Why the flu is not contagious
Why they never found such a thing
How it is produced not in space but in the atmosphere
Association with weather events
The thing that causes the disease (virion-vortex) is not the same as the thing used for diagnosis (exosome-virus)
If influenza is something to do with an excess of field vortices and the mechanism of grounding is the expulsion of field vortices then it might just be the case that an acute state of influenza could be alleviated or even cured by the practice of grounding (earthing).
From the paper: “Earthing or grounding might have significant impact in the management of patients with COVID-19. The main complications of the illness are hypoxemia, coagulopathy, inflammation, and immune defect which endanger the patient’s life. All these complications could be relieved by earthing without the deleterious side effects of the drugs and without cost as well.
It is advised to perform sufficient and regular earthing for at least 40 minutes daily for prevention or treatment of COVID-19 infection. Earthing could be applied at any stage of the illness although earlier is better. It is more effective when conducted on wet muddy earth type. It might be effective for other respiratory viral infections such as influenza.”
Reiteration
Solar activity creates powerful electromagnetic activity which is transmitted to the Earth via helical filaments and impacts upon our ionosphere to somehow result in characteristic seasonal and latitudinal patterns in our atmosphere. The resulting vortices enter the human body where they cause all sorts of trouble but are eventually ejected or neutralised via exosome production or other methods.
The observations of the virologists are not ignored or invalidated but given a new interpretation in terms of scalar wave activity. Such activity is something they can never observe directly, as they don’t have the instruments, and are unlikely to infer, as it involves theories of electromagnetism which lie outside most textbooks.
The original claims of the no-virus movement still hold and a putative mechanism has now been suggested to account for exosome production and its immediate connection to meteorological events.
Postscript: Construction via vortex
Exosomes are described above as arising within the cell from cellular processes whereupon they migrate outwards across the cell membrane.
Considerations of a biological system as being dominated by an electrical bio-field comprising a collection of vortices arranged in a cellular fashion, however, allows for a slightly different interpretation.
In this formulation, a cell is coincident with a vortex field with the nucleus at the centre. Energy accumulates both at the centre and the periphery of the cell leading to much activity at both these places.
Vortex movement at the edge of the cell encounters contra-rotating fields from the other cells and smaller vortices form. The energy from these vortices is used to create the main body of the exosome and further vortex ‘turbulence’ around the particle now forms even smaller vortices which form the familiar ‘corona’.
In this version of events, the formation of exosomes (viruses?) is very closely related to the laws of electromagnetism, with the familiar shapes arising from the natural organisation of electric fields as opposed to a teleological aim of the cell itself.
The cell isn’t packing up resources as it deteriorates and nor is it responding to instructions from a virus. All that is happening is that too much or mis-managed energy has resulted in the formation of out-of-place vortices which will tend naturally to construct biological artefacts from available material.
This should not be too surprising because this is the basic principle of Life itself!
Electrical voltage is commonly thought of by analogy with water in a pipe as a sort of electrical pressure that pumps electrons through a wire. This page puts the argument that a better explanation is to be found by imagining a river flowing downstream pulled by the force of gravity.
Water can be made to flow in at least two ways:
Pressure ‘pushes’ the water from behind. Energy is transferred along the pipe from some source to each section of water which is pushed along by a physical force. Water moves down the pressure gradient from high to low pressure. Removal of the pressure source causes all the flow to stop, with the effect moving at the speed of sound.
Gravity pulls water downstream. In this case each section of water is moved by local forces only and the behaviour of upstream water is irrelevant to the downstream flow. Flow can be stopped by damming up the river but the ‘effect’ here travels downstream at the speed of the flow as opposed to the speed of sound.
There are subtle differences between these two, but which is more appropriate for the flow of electricity? The first is the commonly described mechanism but the second is more consistent with reality.
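A rough numerical illustration of the difference, using assumed round numbers (a point 100 m downstream, sound travelling at roughly 1480 m/s in water, and a river-like flow speed of 1 m/s): in the pressure picture a change at the source is felt almost immediately, while in the gravity picture it is felt only after the disturbance has travelled downstream at the speed of the flow itself.

    # Assumed round numbers for illustration.
    distance = 100.0               # metres downstream
    speed_of_sound_water = 1480.0  # m/s, approximate speed of sound in water
    flow_speed = 1.0               # m/s, assumed river-like flow speed

    t_pressure = distance / speed_of_sound_water  # pressure-driven picture
    t_flow = distance / flow_speed                 # gravity/flow-driven picture
    print(f"Pressure picture: change felt after about {t_pressure:.2f} s")
    print(f"Flow picture:     change felt after about {t_flow:.0f} s")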
The pressure analogy
A search for “what is voltage Harvard science” gets me the following results:
“The force that makes electricity flow; the unit of measure of electric potential” – Harvard Electricity Policy Group – voltage is a force?
“Voltage is the force that makes electrons flow, or the difference in potential energy between two points in a circuit.” – ambiguous: is it a local force or a global difference?
“Voltage is measured in volts with a voltmeter.” – voltmeters actually measure current; voltage is calculated not measured.
“Voltage is the pressure from a power source that pushes charged electrons through a conducting loop.” – No, see here: What is electricity?
“The force of an electrical current that is measured in volts” – Encyclopaedia Britannica
This is about as confused as it gets. Is voltage a local ‘force’ that pushes electrons around or is it some pressure difference between two distant points? Does a local force arise from a pressure difference and if so, how?
Voltage is not measured in the same units as force and is hence not a force.
One of the results reads: “Voltage is the pressure from a power source..” which implies that there is such a thing as (electrical) ‘pressure’ which originates from a ‘source’ and can presumably be transmitted along a wire to affect distant particles. This really needs some justification.
Such statements are easily accepted as true whilst we have in our minds the analogy of water pressure, but such an analogy is simply not valid. In the case of water we know that there is such a thing as ‘pressure’ as we can measure such a thing; a small balloon can be inserted into a pipe and the pressure measured at any place in the flow to give a local pressure reading.
The same procedure cannot be carried out with voltage and this is reinforced by many sources describing voltage as a potential difference between two points. Voltage cannot be measured and isn’t even defined at a single point, only as a difference between two points.
Again, this sounds reasonable until you try to think about it: if voltage is always expressed as a difference then what are the quantities that give rise to such a difference and how do we know they exist? After all we only ever measure one quantity and we conclude the existence of two!
A voltmeter only ever gives one reading and only ever measures what happens inside the voltmeter anyhow; everything else is inferred via the theory.
The flow analogy
In the case of a river flowing downstream, there is no pressure pushing the water from behind, no impact of upstream water on downstream flow. Instead the water moves according to local (gravitational) forces only, with each segment having its own ‘power supply’. Local pressure is actually measurable here; it remains roughly uniform, close to atmospheric pressure, and has little influence on the flow.
Imagine, then, if we did not know about gravity but merely observed ‘flow’ with no apparent driving force; what a tangled web of reasoning we would weave!
The default view of the universe is one of stasis or maybe uniform movement until some force perturbs this and causes a change in such a state. This view has been applied to electricity and the idea adopted that there can be no energy transfer without a motivational force. The resulting framework is a muddled mess.
Hypothesis: Electricity is the flow of electrical field vortices as described by Konstantin Meyl. They have their own intrinsic energy and are hence capable of self-propulsion through a conductive medium. Movement is according to local forces only. In electrical wires they are shaped like ring vortices, with most of the energy carried in the insulating sheath of the cable. What is electricity?
The vortices carry their energy from A to B and it is the same energy that is used to self-propel along the wire. ‘Resistance’ leads to a loss of energy transmitted and this is dissipated as ‘heat’.
Voltmeters only measure current (flow) and a loss of vortex energy along a wire will be interpreted as a voltage gradient.
Moving electrons have nothing to do with this and are never measured.
Resistors in series
Placing a resistor in a circuit is akin to placing a dam in a stream. Pressure builds up and some water is lost through evaporation; in the case of electricity, energy is lost as heat. Overall flow is thus reduced in proportion to the energy loss. Electric vortices build up an analogous pressure when entering a resistive material.
Measuring voltage across different resistors gives the expected result. A voltmeter has a large resistance and current will prefer to flow through a bare wire, but place a voltmeter across a resistor and the current will develop a preference for flowing through the voltmeter, leading to increased current within the meter and a consequent increase in the measured voltage.
The voltage has actually been created by the resistance. The voltage is not driving the current but is a consequence of its impedance, much as the pressure behind a dam is the consequence of the flow and not the cause of it.
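A minimal sketch of the arithmetic, in conventional circuit terms and with made-up component values, illustrating the point made above: what the instrument actually works from is the small current diverted through its own internal resistance.

    # Made-up values: a source feeding two series resistors, with a voltmeter
    # (modelled as a large internal resistance) placed across the second one.
    V_source = 10.0        # volts (assumed)
    R1, R2 = 100.0, 200.0  # ohms (assumed)
    R_meter = 1.0e6        # ohms, a typical order for a voltmeter's input resistance

    # R2 and the meter sit in parallel.
    R_pair = (R2 * R_meter) / (R2 + R_meter)
    I_total = V_source / (R1 + R_pair)

    # Current divider: the fraction of the total current diverted through the meter.
    I_meter = I_total * R2 / (R2 + R_meter)

    reading = I_meter * R_meter    # the figure the instrument computes and displays
    print(f"Current through the meter: {I_meter * 1e6:.2f} microamps")
    print(f"Displayed reading: {reading:.2f} V")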
So which is it?
Tricky. The two theories give similar predictions for different reasons but there are some differences:
Flow geometry. The idea that energy flows in ring vortices predicts certain preferential flow geometries with the magnetic component of the flow thriving in the insulator surrounding a wire and the electrical component moving within the copper core.
This is supported (What is electricity?) by the observations that under-sea cables with too thin an insulating sheath did not perform well at all and by the discovery that the conduction of nerve signals improves with the thickness of the myelin sheath surrounding the nerve.
Circular currents. Circular or toroidal currents are reported in living systems (e.g. around the red blood cells) and also in space where they can span distances of many light years (Thunderbolts Project).
This is consistent with, and natural in, vortex theory, where the laws of electromagnetism mandate movement of electric fields at right angles to the magnetic component. The field moves under its own steam, with helices and toruses being the order of the day.
But how does this happen where a voltage is required to move the electrons and where there is no copper conductor to guide the current? What is the voltage measured from start to finish of this current? On the one hand it must be fairly large to push the current all the way around, but on the other the start point is the same as the end point and so the start and end voltages are the same! There is therefore zero voltage drop!
Wikipedia
Wikipedia gives a slightly different definition of voltage. We have:
“Voltage, also known as (electrical) potential difference, electric pressure, or electric tension is the difference in electric potential between two points“
“Electric potential is defined as the amount of work/energy needed per unit of electric charge to move the charge from a reference point to a specific point in an electric field.”
So the idea of ‘pressure’ is preserved but is somewhat contradicted by the second sentence which introduces the idea of movement between two points and the energy required to accomplish this. Electrical ‘pressure’ is not defined at a single point as with water but only by some movement between two points, one of which is arbitrary.
The idea of an electric field has been introduced and voltage is ‘work’ needed to move a charge within that field. This is interesting because previous definitions have defined voltage as the motivational force for moving the charge (electrons) in the first place whereas here it seems to be defined by some other external (hypothetical) force moving the charge against the field gradient.
A charge will move within an electric field anyhow because of electric forces but this does not seem to constitute ‘voltage’.
So which is it? Is voltage a motivational electric field or is it the effort to move charge against such a field?
Further reading does nothing to clarify matters and only adds to the confusion. The Talk Page makes matters even worse with everybody having different opinions on what is going on but with some agreement on the following statement:
“This article is excessively technical. While all the math is relevant in a higher-level mathematical context, to the everyday reader, this article is almost completely useless.“
The river flow analogy revisited
Consider the idea of moving a wooden log upstream as an analogy of moving a charged particle. It requires a degree of ‘work’, of ‘energy’ as the movement opposes the flow and resistance is encountered.
The resistance and the energy required depend almost entirely on the flow characteristics, i.e. the current. In a lowland river, the work required is hardly a reflection of the height difference and has nothing to do with any pressure drop.
Try to drag a log through water in a pipe with pressure driven flow and the resistance met is again entirely dependent upon current and has little to do with moving against the pressure.
Try reading Wikipedia again and it seems obvious now that they are using an inconsistent model and have needed to resort to increasingly abstract and unfalsifiable concepts (potential difference, reference points at infinity) to try to patch things together.
‘Potential’ at a point isn’t directly measurable in principle or practice; it has no absolute value at any point in space and cannot therefore be said to represent anything ‘real’. It only exists between separated points and so any real effect attributed to such a quantity is really a case of action at a distance.
So what is voltage?
If we start with something measurable and observable then we should begin by saying:
“Voltage is a measurement of current when a high resistance meter (voltmeter) is placed in parallel to a section of circuit”
This makes it absolutely clear that what is being measured is current, not ‘potential’, and raises the important question as to why different amounts of current will be captured at different parts of the circuit.
For an answer, imagine drawing some current out of a stream by a piece of pipe. The wider the pipe the more current flows through it.
Odd effects will be had when placing the pipe parallel to a dam (resistance); some pressure has indeed built up and this will lead to an increased measurement of current within the pipe, but again, the pressure has been created by the flow, not the flow by the pressure.
Voltage is therefore not at all fundamental but an emergent property of flow characteristics, conductive properties, circuit geometry and voltmeters.
Electricians
Electricians will say that voltage is what a voltmeter measures and that it is related to current and resistance via the following equation:
Voltage = Current x Resistance
This is fine and will give good results in practice because voltmeters, resistors and ammeters are calibrated so as to give the required answer. All that is ever measured is current within a meter of some sort, with voltage and resistance being inferred (calculated), never directly measured.
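As a worked example with made-up numbers: a measured current of 2 A through a nominally 5 ohm resistor is reported as 2 × 5 = 10 V; the ‘10 V’ is a calculated figure layered on top of the current measurement, not an independent observation.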
The practicality of the equation gives no indication as to the underlying physical mechanism; it does not prove that voltage is fundamental or even pressure-like in nature. The fact that the equation is so reliable means that there is ‘some’ interesting phenomenon at work. The properties emerging from the fundamental physical laws have characteristic behaviours that are repeatable and measurable and it is these which give the impression of being themselves fundamental.
There is no concept of ‘pressure’ at all in the above equation; the whole idea is a matter of interpretation only.
Relevance to blood flow
Popular imagination has it that the blood is pumped around the body by a vast pressure difference created by the heart, but anyone who researches this seriously comes to the conclusion that this is simply not true and that the blood moves around ‘by some other means’ or ‘by means of its own energy’.
The nature of this energy is hitherto unknown but we can take a good guess now that it involves energised ring vortices which move the blood through the arteries under their own power. See here: Blood flow and scalar waves
Several researchers have noted that the blood will consistently flow from low to high pressure in the aorta. This is inconsistent with being moved by external pressure (as with ‘voltage’) but entirely consistent with the flow being created by the blood itself in a similar manner to electric currents described above.
Blood propels itself down the aorta before encountering the smaller arterioles. These provide a greater impedance to the blood and so some pressure will build up as with a resistance in an electric circuit.
Again we find that the pressure is caused by the dynamics of the flow and not the other way around.
Summary
A coherent definition of voltage is not easy to find.
The idea of electric current flowing under its own energy seems a likely explanation for the observed phenomena and is supported by the Field Theory of Konstantin Meyl. (Scalar Waves: a first Tesla physics handbook)
Without this electrical ‘vitalism’, there is no originating force in electromagnetic theory and everything must derive from essentially static entities. Electrons are stationary unless moved by the gradients of static fields, but static fields themselves are the product of .. electrons!
Something else must be in play to organise these first charges into a field but the only thing to organise electrons is .. another field! We are in a closed loop without any originating cause or fundamental organising principle.
The idea of an ever moving electric vortex field solves these philosophical problems. It gives a primal cause as consisting of field movement which is distributed throughout the universe and a basic organising principle of vortex flow mandated by internal forces.
Field vortices prefer to move through conductive materials and will self-organise according to local conditions to provide the illusions of electrical pressure, potential, field gradient, current and voltage.
‘Resistance’ is created by local field variations which destabilise the vortices to result in either heat exchange or photon emission with a concomitant loss of energy.
Theodor Schwenk (1910-1986) showed that shaking water on different days would somehow imprint that water with a record of cosmic events and that this would affect the germination of seedlings immersed in such water. Water shaken during an eclipse had an especially deleterious effect on the growth of the plants.
From his book, “Sensitive chaos”:
“As a striking example among the abundance of constellations in the heavens we will choose an experiment made on the day of a total eclipse of the sun.
“During the course of the day at regular intervals – say every quarter of an hour – a different vessel filled with water is shaken for a short time. Each time this is done a kind of sense organ, which closes again when the movement ceases, is opened to the momentary happenings in the heavens. And each time a somewhat different situation in the universe is imprinted upon the water: the gradual movement of the moon towards the sun, the commencement of the eclipse, the totality of the eclipse and the gradual movement of the moon away from the sun.
“At the end of a series of experiments like this, the whole course of the happenings of the day is contained in the row of bottles that have been shaken.
“How can this be made visible?”
“There are various methods, of which we have chosen the following: in the water of each vessel grains of wheat are caused to germinate: this can be done days or weeks later, as long as the water has not been disturbed anew. The grains of wheat are all placed in the water at the same time and under the same experimental conditions; the effect of the impressions which permeate the water will be seen in the growth of the blades.
“During the same span of time the blade in one vessel of water will grow better than that in another. The lengths of the different blades in the different vessels will depict the course of the eclipse. The blades in the water shaken at the time of the total eclipse do not grow as high as those in water shaken before or after.
“A graph can be made of the lengths of the blades in the consecutive vessels of the water and the so-called growth curve results. Other planetary constellations will give growth curves of a different character.”
The shroud of Turin is a piece of linen cloth with the face of a man somehow ‘imprinted’ upon it. Whatever the source of the cloth or the identity of the figure, the nature of the image needs some explanation. It is not comprised of dye or paint but seems to be caused by overheating and hence scorching of individual fibres.
The image below shows a close-up of some of the discoloured fibres from a John Campbell video.
John explains:
This is a close up of the nose of the image
The yellowed fibres form the building blocks of the image
The yellow colouration is only one fibre (500 nm) deep
Yellow fibres sit adjacent to white fibres
Surface fibres are the most affected
As the yellow fibres dip beneath the next thread, they become white
The effect does not appear to be formed by pigmentation and the most popular explanation is that the fibres have somehow been overheated by a brief but intense burst of radiation.
This seems problematic:
What were the causes of this radiation?
Why are adjacent fibres treated differently?
Why the evenness of the image?
Why only discolouration at contact points?
The phantom leaf effect
A leaf was placed between two pieces of Perspex and photographed. Pieces were removed (centre and top right) but when the leaf was photographed again under an applied magnetic field, the outline of the missing pieces was still visible. More: The phantom leaf effect
The explanation given by Konstantin Meyl in his book “Scalar Waves” is that tiny vortices of magnetic energy have been formed in the living leaf and have moved to the insulating material of the Perspex wherever there is contact between the two substances.
These vortices form a persistent and stable ‘field’ within the plastic layers in a shape reflecting the contact points of the leaf.
When a magnetic field is later applied to the arrangement, energy is discharged in patterns guided by this persistent ‘morphic’ field and this is what appears in the photograph.
The shroud: a hypothesis
The cloth is laid over the body and electromagnetic vortices travel from the body into the fibres of the cloth at the contact points, possibly aided by sweat at the surface of the skin. The fibres of the cloth are tubular in shape and therefore encourage the formation of ring vortices which may travel along the structure.
These ring vortices are now in a position to absorb more energy from the environment, including from ambient heat, from the infrared in sunlight and from the solar neutrino stream.
If a sudden increase in energy input were to occur then we may expect highly localised heating along the length of individual fibres and if there is enough energy then the heat resulting from this may be enough to discolour the linen on a strand by strand basis.
Contact between body and cloth may be light and so some parts are affected but not others and in this way we can see disparity in colouration for some length along adjacent fibres of cloth at the finest magnification.
A sudden influx of energy
The ring vortices in the cloth are capable of absorption of neutrinos from the solar stream which will help to maintain them within the cloth and even to allow propagation a small way along the fibres.
Now from Meyl, we discover that during an eclipse, the solar neutrino stream is focused by the moon to specific places on Earth and will increase in intensity by a factor of up to 20,000.
Konstantin Meyl: Scalar Waves (First Tesla physics textbook)
This sudden increase in energy is thought to be linked to many events on Earth including plagues, mental disturbance and earthquakes, with the neutrino stream becoming powerful enough to move the tectonic plates. See: Neutrinos, eclipses and plagues.
Meyl is claiming that the energy and resulting heating effect are sufficient to cause spontaneous combustion in hayricks and so certainly there is enough energy to cause burning or oxidation in a piece of linen.
Energy is absorbed by individual ring vortices within the linen strands, is transmitted along the fibre to other existing vortices and when a critical level is reached is shed in the form of ‘heat’ which then promotes a highly localised chemical reaction resulting in the discolouration observed.
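For scale, a hedged back-of-envelope calculation: the standard figure for the solar neutrino flux at the Earth’s surface is roughly 6 × 10^10 per square centimetre per second, so a twenty-thousand-fold focusing, as claimed, would correspond to something like 1.2 × 10^15 per square centimetre per second over the focal region.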
Eclipses at the time of the crucifixion?
Opinions differ of course but there are plenty of claims that not only was there an eclipse at the time of the crucifixion but many earthquakes as well. Note that according to the solar neutrino absorption theory, both these are now physically linked.
The Darkness Over Jesus’ Resurrection Was Real and Seen By Many – Reddit:
“Thallus, in the third book of his histories, explains away the darkness as an eclipse of the sun—unreasonably as it seems to me.” – Julius Africanus
“On the whole world there pressed a most fearful darkness; and the rocks were rent by an earthquake” – Africanus
“(there was) ‘the greatest eclipse of the sun’ and ‘it became night in the sixth hour of the day [i.e., noon] so that stars even appeared in the heavens. There was a great earthquake in Bithynia, and many things were overturned in Nicaea.’” – Phlegon
“on a certain sixth day, and about the sixth hour, the sun, to our great surprise, became obscured, through the moon passing over it. For, when the whole orb had been throughout darkened, by a black mist of darkness, and the sun’s disk had begun again to be purged and to shine anew…” – Dionysius
Questions answered
We are now in a position to address the questions posed above:
Q: Why is the shape of the body reflected in the discolouration? A: Because the vortices form only at points of contact.
Q: Why is the image only one fibre deep? A: Because contact is at the surface and ring vortices will travel along the strands in a similar manner to a fibre-optic cable, but not between them.
Q: Why is the discolouration longitudinally uniform? A: Vortices tend to self-organise by moving along the fibre and spacing themselves out evenly.
Q: Why does the radiation not scorch the whole cloth? A: Neutrinos will normally pass right through ‘matter’ and indeed can be measured coming out from the Earth’s surface, having passed right through the planet. They are, however, absorbed by vortices of an electromagnetic nature and are hypothesised to be the mechanism behind the phenomenon of hydro-luminescence. In the shroud then, heating is localised to a microscopic volume of linen, that inhabited by the vortices.
The opening words of Genesis and John seem, at first sight, impenetrable, mystical and self-referential. This page tries to show that they can be viewed as a short treatise on how to structure ‘knowledge’ in the general case, with the equations of vortex physics as a specific example.
John 1:1 “In the beginning was the Word (logos), and the Word was with God, and the Word was God.”
Logos: “The Word of God, or principle of divine reason and creative order”
Wikipedia: “The word derives from a Proto-Indo-European root, *leǵ-, which can have the meanings “I put in order, arrange, gather, choose, count, reckon, discern, say, speak””
So in the beginning was an ordering principle that would be at the absolute heart of reality, a universal law that would apply to every action on Earth and every movement in the cosmos for all eternity.
The Meyl field equation
The best way that modern science has of describing physical reality is the field equation below from Konstantin Meyl:
The behaviour of all matter and energy is described in a single equation. Everything is a manifestation of a single energy field, ‘E‘, and there is no true separation between matter, energy and information.
The equation is not ‘real’ itself but only represents an ideal, an ordering principle to give form and structure to a field of an electromagnetic nature.
Since it is not real it cannot be said to ‘exist’ in time or space and cannot even be thought of as ‘being’ prior to the universe but instead is clearly eternal, ineffable and absolute.
Every atom, photon or energy structure obeys this law and there are no exceptions, no other laws or ‘alternative ways’ of looking at things. Every biological event and every freely made conscious decision is subject to this single physical principle.
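The equation itself appears only as an image on the original page. As a hedged sketch, and assuming the form given in Meyl’s Scalar Waves is being recalled correctly here, it is a damped (telegraph-type) wave equation for the field E along the lines of:

    \[
      c^{2}\,\Delta \mathbf{E}
      \;=\;
      \frac{\partial^{2}\mathbf{E}}{\partial t^{2}}
      \;+\;\frac{1}{\tau_{1}}\,\frac{\partial \mathbf{E}}{\partial t}
      \;+\;\frac{1}{\tau_{2}}\,\frac{\partial \mathbf{E}}{\partial t}
      \;+\;\frac{\mathbf{E}}{\tau_{1}\,\tau_{2}}
    \]

where the term in τ1 is the damping associated with eddy currents and the term in τ2 with potential vortices; the book should be consulted for the exact form and the meaning of the time constants.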
In the beginning..
“In the beginning God created the heaven and the earth.” – Genesis 1:1
There are of course many interpretations of this sentence; the one that springs most easily to mind is that from the originating principle spring two interlocked realms: the one Earthly, temporal, material and physical, and the other of an altogether more insubstantial nature, consisting of energy, mind and ‘ideal’ forms rather than physical structures.
The Meyl equation decomposes into several terms and this leads in practice to physical structures of two distinct types:
Electrical eddy currents
Magnetic potential vortices
The electric eddy currents tend to form the stable structures of physical reality such as electrons and atoms, whilst the magnetic vortices cooperate to form an energetic and informational realm comparable to Steiner’s description of the etheric body; a vitalistic non-material entity that directs, and gives life to, the otherwise inanimate Earthly body.
Magnetic potential vortices comprise the medium of choice for the transmission of nervous signals and a large scalar wave complex is hosted in the physical brain, being responsible for all of our thoughts, decisions and feelings.
Creation or instantiation?
“In the beginning God created the heaven and the earth.” – Genesis 1:1
The idea of ‘creation’ does tend to suggest a single event before which there was nothing and after which the job was done with the Logos now being surplus to requirements. However, the text is reportedly subject to other interpretations:
“Some scholars argue that “בְּרֵאשִׁית” (bereishit) can imply an ongoing action or a construct state, which would make it dependent on the following clause. Thus, it translates more like “When God began to create the heavens and the earth,” suggesting an ongoing process rather than a completed action.”
“This alternative translation is influenced by the way the Hebrew Bible sometimes uses construct forms, where “in the beginning of” would lead into a subordinate clause, suggesting that the act of creation was a process rather than a singular event.” – Reddit post
In the case of a universe obeying the field equation we cannot even say that the equation is responsible for ‘creating’ or for that matter ‘doing’ anything. It isn’t real, it is outside of the real world and hence can’t initiate any action of any sort but still all activity without exception must pay absolute obeisance to this most fundamental of laws.
Any act of creation is really one of ‘instantiation’; the ideal is manifest as physical, the abstract has a physical instance created in its own image.
So Genesis might begin: “Heaven and Earth are coming into existence in accordance with the eternal and immutable patterns of the Logos”.
I am that I am
Moses asks the name of the Logos and receives a somewhat cryptic reply variously translated as:
“I am that I am”
“I will become what I choose to become”
“I create what(ever) I create”
“I am the Existing One“
Very nice.
Examine the field equation of Meyl and ask what exactly is the field? How is it defined?
The field ‘E‘ is defined here in terms of its own properties. The equation is self-referential and it makes no reference to any other entity or equation. It is self-contained; it is what it is!
Many will complain that a self-referential definition is no definition at all, but how else to structure a theoretical framework? If we define an entity in terms of another entity then we require a definition for that other entity in terms of even more entities and so forth ad infinitum.
The equation therefore can only describe its own properties and nothing else. This makes perfect sense and in fact is the only reasonable option.
Something must ‘pre-exist’; there must be a ‘first’ definition. However, ‘first’ and ‘pre-exist’ are the wrong words since, although the equations as written down may exist as real entities in time and space, the concept (logos) behind them certainly does not; it has no beginning and no end.
The logos does not pre-exist as such because it never came into being at any point in time but is eternal and immutable; it is “The existing one”. The odd character of the language arises not from a desire to sound mystical or even as an artefact of translation but rather from the difficulty of describing such an abstract concept in terms of everyday language.
I create what I create
It is quite possible that the Universe started out as almost nothing at all and that all matter and energy have arisen since then. It is possible that there was a time before which there was nothing and after which there was ‘something’.
Another possibility is that there was always ‘something’ but that it was incomplete and that more matter and energy are in a continual process of manifestation; they are coming into existence in accordance with the field equations.
Either way there is an ongoing process of creation.
All creation must be in accordance with the rules of the field equations (the logos) and since an imperative for ‘movement’ is built into these rules, the specific instance of the Universe that we currently inhabit can be said to be bringing itself into creation at every point in space and time, everywhere and all the time.
The equations themselves do not actually ‘create’ but the Universe acting in their image is in a continual act of creation. The Universe is not separate from this act of creation but is in fact synonymous with it.
The Universe is its own creator then, but the ‘organising principle’ of such creation lies outside of the scope of the Universe itself.
“All things were made by (in accordance with) him; and without him was not any thing made that was made.” – John 1:3
Once again the language from the Bible seems cryptic but attempts to explain in plainer terms don’t improve the matter much.
Beings of light?
“Angels are beings of light, and are often described as messengers of God who deliver his message to humanity” – Google AI
As mentioned, the field equations have resulted in a world that is partly ‘physical’ (consisting of stable electric eddy-currents) and partly informational-energetic (formed by vortices of magnetic potential). However, there is another solution, lying between the two, which is symmetric in electric and magnetic activity: the electromagnetic wave or photon.
Photons are manifest as light or as radio waves. They are the medium of choice for electromagnetic signalling and can therefore justifiably be described as ‘messengers’ and ‘beings of light’, maybe, but there is not a great deal of information in a photon by itself and biological systems will prefer to use magnetic scalar waves almost exclusively.
Bio-photons are not really photons but scalar waves that are measured as photons because of the primitive nature of electronic measuring instruments.
The breath of God
“And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit [Hebrew ruach] of God moved upon the face of the waters.” – Genesis 1:2
“Ruach can also mean “breath, air or wind.” In Job 38, there is the striking image of God’s speaking to Job out of the “whirlwind”“. – Answers from Genesis
Given the general theme of the rest of this website (vortex physics) it is very striking indeed that a vortex should be associated in any way with the act of creation and that such an emphasis should be put upon the surface of the sea and its relationship to the movement of air.
Newgrange and the Land of Khem
The image below is of the kerbstone at the Newgrange Neolithic site in Ireland. We see waves of water at the bottom and at the top right some sort of causal impulse resembling a pressure front pushing the air over the waters. The wind forms quite prominent vortices which are emphasised above all else and the result is some crystalline looking substance at the right.
The Land of Chem YouTube channel theorises that this is an instruction manual for the creation of some sort of chemical used in agriculture; some sort of fertiliser product. Input of air from the entrance shown flows over the surface of the water and hastens a bio-chemical reaction which produces the crystals shown.
The vortex physics of Meyl and Lanka
This scheme is entirely consistent with the vortex physics of Konstantin Meyl where the interface between water and air is a propitious environment for the generation of energetic field vortices. These act as energy accumulators and are instrumental in a variety of molecular interactions including the manifestation (creation) of matter from energy and in addition, the transmutation of one element into another and the actual splitting of water molecules into their constituent atoms.
Meyl describes the evaporation of water:
“The water surface facilitates the formation of potential vortices, because of the high permittivity. Thereby the vortices absorb single water molecules and carry them away.” – Konstantin Meyl: From Vortex Physics to the World Equation.
Stefan Lanka identifies membrane water with Pi water, has stressed the importance of vortices at the surface and claims that all life originates from such phenomena:
“There is a substance that is a fundamental part of the realization of life and that is the so-called “water membrane” or surface tension of water.” – Stefan Lanka
“This membrane, which is wrongly defined as the 4th phase of water, is the substance that water itself creates when it comes into contact with gases, solid surfaces, dissolved substances or when whirling movements are generated.” – Lanka
“Surface formation from the liquid water creates energy and all things and structures emerge from it. “Water is life “ is a true statement.” – Lanka
“Let there be light”
“And God said, Let there be light: and there was light.” – Genesis 1:3
One way to understand the early Universe is to look out into space and see what is happening there. Vast glowing clouds of plasma exist spanning many light years and often forming helical structures.
These are assumed by the Electric Universe paradigm to consist of clouds of charged hydrogen plasma which can, under some circumstances, emit photons thereby making them visible from Earth. The movement of the plasma constitutes ‘current’ in accordance with standard theories of electromagnetism. But see: What is electricity?
A better explanation, then, is that the electric fields came first and everything else emerged from them. Fields form natural vortices, which explains the shapes we see, and the ‘currents’ within can stabilise to form ‘matter’ in the form of electrons or hydrogen ions (Meyl – Scalar waves) or can ‘unroll’ to form the familiar photon streams which then propagate at the speed of light all around the galaxy.
So the order of creation here is roughly:
Logos → Spirit (scalar waves) → Light → Matter
The electric fields precede the creation of matter (not the other way around). Matter is combined with spirit under the guidance of the Logos and the result is ‘Life’.
“All things were made by him; and without him was not any thing made that was made.” – John 1:3
Adam was created from clay?
“Then the LORD God formed [that is, created the body of] man from the dust of the ground, and breathed into his nostrils the breath of life; and the man became a living being [an individual complete in body and spirit].” – Genesis 2:7 (Amplified Bible)
Now if the ‘breath of life‘ is identified as the field vortices or eddy currents of Meyl’s vortex physics then this makes some sense. Field vortices constitute both energy and information and the sense of smell is the most basic of the senses that connect us to the outside world.
Vortices carrying the information relating to scent will travel up the nose and are conducted first along the antenna hairs in the nostrils and then along the olfactory nerve to be integrated into the scalar wave complex of the brain: Scalar waves and nerves
Very likely the cilia on the surface of single-celled organisms perform the same function. This particular form of sensory perception therefore was present in very early life forms indeed.
An individual is not ‘complete in body and spirit‘ if they have no ‘spirit‘ (scalar wave bio-field) and similarly cannot be called ‘complete‘ without any connection to the outside world.
“God did not make a body and put a soul into it like a letter into an envelope of dust; rather he formed man’s body from the dust, then, by breathing divine breath into it, he made the body of dust live, i.e. the dust did not embody a soul, but it became a soul – a whole creature.” – Wikipedia
“In him was life; and the life was the light of men.” – John 1:4
Breath as vortex energy
In early organisms it may well be that the input of vortices from the environment represented a significant and necessary source, not only of information but also of energy. Scientists report enhanced activity on microscope slides when the micro-organisms are exposed to increased levels of laboratory lighting or ambient heating.
This doesn’t apply only to primitive organisms according to Gerald Pollack:
“I suggest that it’s not oxygen gas that our bodies require, but electrons drawn from that oxygen. That is, no gas flows from alveoli to capillaries, only electrons extracted from the oxygen gas.” – Pollack
Maybe consider that no matter at all needs to pass through the lungs, but field vortices only. The divine breath is now pure energy with no material substance necessary.
But .. clay?
Dutch physicists Geesink and Meijer found that phyllosilicate clays are constructed in layers that act as resonance chambers, trapping incoming radiation (light, infrared), allowing it to accumulate and then releasing it as a new set of frequencies: The Origins of Life
These frequencies are shown to be experimentally very close to those maintained by living cells and higher order organisms and most likely form the substrate of the whole bio-field system; they play host to the ‘breath of life’ or ‘spirit’.
Without this system we are just inanimate matter.
Life sustained with clay: a shrimp living in the dark
The following is from Louis Kervran’s book: Biological transmutations:
“In a medium entirely composed of humid clay, life is possible. The humid clay is compact, thus air cannot enter. This clay is impermeable to carbon and oxygen. Oxygen can, nonetheless, be produced by dissociation of the water molecule.
“Let us note the case of the Niphargus shrimp, a small animal half-an-inch in length that lives in the clay of caves. If a shrimp is given organic matter (meat, etc.), it vegetates and dies. It also dies if it is not kept in humid clay. Experiments have shown that it grows normally in pure clay to which nothing has been added.
“Research workers therefore thought that the shrimp lived on clay and nothing but clay, an impossibility according to the laws of biochemistry. Actually, it cannot live thus in clay alone, but this clay contains microorganisms which work for the shrimp, making vitamins, various mineral products, nitrogen, phosphorus, and calcium, etc.”
[Note: ‘Dissociation of the water molecule’ is also claimed by Konstantin Meyl; energy concentrates towards the centre of a vortex and trapped water molecules will be torn apart into hydrogen and monatomic oxygen.]
Healing the blind
“When he (Jesus) had thus spoken, he spat on the ground, and made clay of the spittle, and he anointed the eyes of the blind man with the clay,
“And said unto him, Go, wash in the pool of Siloam, (which is by interpretation, Sent.) He went his way therefore, and washed, and came seeing.” – John 9:6-7
The strong sunlight in that part of the world filters through the clay and emerges as the restricted set of frequencies discovered by Geesink and Meijer here: Healing frequencies.
The frequencies lie upon a twelve note Pythagorean scale and input of radiation of these frequencies seems invariably to benefit the recipient whereas the off-scale notes in between seem detrimental to organic life.
My best guess as to how this works is that the bio-field system of the human body uses a network of resonant scalar wave (Tesla wave) interactions as a communication system. The set of frequencies used is determined in part by the distance between the nodes of Ranvier on the nerve axons.
Input of electromagnetic waves at these frequencies will strengthen both the energetic and physical infrastructure whilst the off-scale notes will initially cause electromagnetic noise which could eventually lead to a deformation of the physical structure of the nerves.
This electrical system is therefore dependent in part on the physical structure of the nerves and conversely the physical structure is maintained by the electrical system. Physical damage will manifest as impaired communication.
Nerves can self-repair using the same Tesla waves as both a source of energy and a morphological template. If there is insufficient energy then repair fails, but an additional influx of scalar waves from frequency modulated sunlight can restart the process.
The Holy Trinity
The Father – the Logos, the field equations. Immaterial and eternal.
The Son – The physical realisation of the Logos. Temporal, physical, Earthly. The world of ‘things’ and people. This includes Jesus as a ‘completed person’ or ‘perfected being’, an exemplar or teleological aim towards which the evolution of Humanity is directed one way or another.
The Holy Spirit – The ‘etheric’ (non-physical) informational system of Tesla waves that is responsible for all biological order including inheritance, evolution and consciousness.
The existence of a bio-field
“Before I formed you in the womb I knew you, before you were born I set you apart; I appointed you as a prophet to the nations.” – Jeremiah 1:5
If the ‘I’ here is the Logos then this can be interpreted as saying that the existence of the information required to create a person actually precedes the formation of the physical self. The sentence refers to some immaterial template that will be used to guide the physical assembly of matter during embryonic development.
The formation of a living being is not just a matter of chemical reactions. Development is according to a teleological plan, a guided evolution of form and function with its ultimate origins in an electromagnetic bio-field. See: Evolution and cognition
This bio-field template contains the information required to create, not just physical form, but all inherited behavioural patterns, preferences and temperament. These qualities are again teleological in nature and so a large part of a person’s destiny is dependent upon information that precedes embryonic development.
The prophet has been ‘appointed’ and this is ‘known’ to the Logos, the root source of all biological organisation.
Enzyme reactions are puzzling:
Sometimes they react and sometimes they don’t.
Reactions in a laboratory are different from reactions in vivo.
The application of heat seems to speed up reactions.
Sometimes acidity affects the reaction rate.
The addition of a catalyst enables a reaction without affecting the catalyst.
Hypothesised mechanisms include receptors, binding sites and catalysts but no description of their mechanisms is given in terms of any sort of fundamental laws; what are receptors made of for example?
For two molecules to bind together some long range attractive force is necessary to bring them near to each other along with sufficient energy to overcome some sort of potential energy barrier.
For a catalyst to work we again need some sort of long range influence or even energy exchange.
The mainstream kinetic theory of gases has molecules bumping into each other to supply the movement and energy, but this doesn’t explain all the effects seen. Van der Waals forces arise from fixed properties of the atomic structure and should not vary with the environment.
Hypothesis:
Bio-molecules contain hexagonal ring structures which promote the construction of electromagnetic ring vortices. These act both as energy accumulators and magnetic dipoles and add an extra layer of complexity to molecular interaction.
Magnetic forces exert long range attraction, pulling molecules together and orienting them correctly. The potential barrier is overcome and the whole arrangement settles to a new, stable, low-energy state. There is likely some surplus energy now and this simply diffuses away into the general vortex matrix as ‘heat’ or maybe transduces to infrared light.
The application of heat to the system is a way of adding energy to these ring vortices and will speed up reactions in general. A catalyst is a way of introducing both extra energy and additional attractive forces into the reaction. Energy accumulates on the ring vortices of the catalyst and is used for the reaction. However, the molecule stays intact and the catalyst is not physically depleted but will continue to accumulate energy, to re-fuel for the next interaction.
Energy accumulation is in part from vortex transfer (heat), in part from infrared absorption and in part from the absorption of solar neutrinos. Neutrino density increases by a huge factor during eclipses and so the effects seen by Shnoll and Piccardi are now to be commonly expected; how else to explain these? Stirring a solution is a way of adding extra vortex energy by kinetic means.
If the body or cell can control energy input to the reaction then the speed and possibly the ‘nature’ of the reaction can be controlled on a highly localised basis.
This scheme adds an extra layer of complexity to the Van der Waals forces that is independent of such forces, decoupled from the atomic structure and whose strength varies over time according to both ambient conditions and cosmic cycles. The addition of magnetic dipoles seems to be an adequate explanation for the mechanism of the various receptors, inhibitors etc. and the variability of results according to environmental conditions.
If fundamental biological activity is affected by phases of the moon then what are the implications for human health? How does the body compensate for this?
Implications for ‘The Scientific Method’
Biologists have made certain observations and deduced that some structure they call a ‘receptor’ is somehow responsible, and they have produced various classes of receptors and reaction types based upon this theory. They can’t describe a receptor in terms of basic physics and have no direct evidence for these entities, just observations of the assumed effects.
Nevertheless, it seems that some science can be done and conclusions can be drawn as the ‘receptor model’ is largely correct. It isn’t necessary to prove the ‘existence’ or ‘nature’ of the receptors if the observations conform to a consistent and predictable pattern.
Those in the field of biology can be forgiven in this instance for not describing their observations in terms of basic physics, as ‘basic physics’ is inadequate in this respect and that is the job of the physicists.
Revealing the relationship between solar activity and COVID-19 and forecasting of possible future viruses using multi-step autoregression (MSAR) – Nasirpour et al https://pmc.ncbi.nlm.nih.gov/articles/PMC7961325/
The authors believe in viruses but still conclude:
“Regarding the results of this study, we found that sunspots are the main cause of virus generation in the world.
This research reveals that the biological and astrophysical mechanisms are related to the generation of world pandemics such as COVID-19.“
How do sunspots cause disease?
First note that many of the disease outbreaks started before the sunspot maximum which tends to suggest that it is not the sunspots per se that are the cause of the outbreaks.
Mainstream wisdom is that sunspots originate from deep within the sun according to some internal process. We may somehow be seeing the effects of this process at the Earth’s surface before they are visibly manifest on the Sun’s surface.
Other cosmologists see the solar cycles driven by external forces in the form of electromagnetic ‘galactic wave currents’ or some such. Similar vortex filaments between the Sun and the Earth are responsible for coupling events on Earth with those on the sun. Sometimes effects are seen first at the sun and sometimes they appear as disease on Earth before the solar cycles peak.
These electric currents between the Sun and Earth will have an impact upon our weather and electrical discharge from the ionosphere will disturb the regulatory systems of our bodies thus leading to diseases of an inflammatory nature. See: Influenza and weather
Connections between the sun and Earth are quite surprising, with statistical correlations between the Earth’s rotation rate and solar activity deep within the sun.
“Not only the minimums of the Earth’s rotation show connections with the solar activity period, but also, as Currie (1973) showed, the rotation rate of the Earth actually correlates with the solar activity!” – Attila Grandpierre
Grandpierre notes that sometimes the change in solar activity comes first and at others it is the Earth’s variations that seem to initiate activity in the sun!
So whatever the origin of these phenomena, it seems to have the power to both cause sunspots and affect the rotational speed of the Earth. From this perspective then, the idea that it could somehow be responsible for causing disease on Earth now seems a little less surprising.
The decline in disease
The chart below is from the Dissolving Illusions website and shows the decline of most so-called ‘infectious’ diseases from 1840 to 1976. All diseases shown were almost extinct before the mass production of penicillin in 1944 and certainly before the first vaccines in 1957.
The vaccines cannot therefore be responsible for the abolition of these diseases, which begs the question: What is responsible?
The answer according to many people now is that these diseases were caused by some sort of poisoning and that improvements in hygiene, sanitation and workplace conditions are what led to the dramatic decline shown.
This cannot be the whole story though.
The mortality rates for scarlet fever in particular show, not a steady decline, but instead huge variations which suddenly settle down circa 1900.
These variations have two outstanding features:
Magnitude: They are of a greater amplitude than the overall average decline
Periodicity: They show clear and regular cycles
The swing over a three-year period is actually greater in magnitude than the entire average decline achieved over the hundred years!
What is the explanation then for these short term variations? Improvements in hygiene now seem very unlikely; how to explain a coordinated nationwide predisposition for hand-washing that comes and goes every few years? How to explain any influence that has such a cyclic nature?
Sunspot cycles?
The peaks in the mortality rates for scarlet fever look to be between 5 and 6 years apart, i.e. half a sunspot cycle. Now given the strong association between other diseases and sunspots, why should it not be so that these cycles are also the result of solar influences?
Moreover, if such an explanation should be found sufficient for the larger variations in mortality then why is there any need of a separate explanation for the general decrease of mortality rates over the century?
The idea that the observed decline is nothing to do with toxicity and everything to do with sunspots will just seem like nonsense to many, but if it is supported by the data then it must at least be considered plausible.
This page takes a very brief look at some puzzling and important features of water and tries to rephrase them in terms of vortex physics. Elements include: interfacial (EZ) water, water memory, fourth phase water, ice formation, surface tension and tubular flow.
Interfacial (EZ) water
Where water meets another medium, whether it be air or glass, say, there exists a zone of water that will exclude impurities, is claimed (and disputed) to be more dense than bulk water, and will absorb electromagnetic radiation at around 270 nanometres (in the ultraviolet), the energy of which is used to maintain such a layer.
In terms of vortex physics, what is happening is that the change in conductivity between water and the air necessitates a change in the structure of any electric field vortex that needs to pass across the interface. A build up of energy occurs at the surface of the water and this affects its physical structure.
The ‘etheric’ part of the water structures the physical.
Impurities are excluded from water adjacent to glass and the build up of energy at the surface of a body of water is what is responsible for surface tension. The molecular structure of the water is not what is responsible for the properties of the water so much as the increased ‘field energy’ caused by the vortex structure.
Physicist Martin Chaplin makes the claim that the H2O molecules in a block of ice are still behaving as if they were in liquid water, still forming temporary dynamic structures with no permanent molecular bonds. The solidity and macro structure of the ice persists independently of the molecular forces and therefore cannot depend upon them.
Gerald Pollack speculates that interfacial water adopts a fixed hexagonal crystalline structure but given the comments above this seems highly unlikely. None of the observed effects of interfacial water (density, absorption spectrum etc.) explicitly require a hexagonal structure.
Vortex structures are natural, ubiquitous and consistent with electromagnetic field theory and so these should be considered as somehow responsible for the restructuring of the water and for the consequent effects.
A field vortex is just a stationary electromagnetic wave and it is therefore natural that it should absorb other electromagnetic waves, meaning photons. No extra mechanism and no additional hypothesis concerning hexagonal water is needed to explain this.
Vortexed water
Gerald Pollack did an experiment:
Put water in a glass and measure the temperature
Stir the water
Measure the temperature again – colder!
So energy has been put into the water but the ‘temperature’ has gone down!
When temperature is taken with a thermometer, what is being measured is the rate of heat loss of the water i.e. the rate of transfer of energy from the water to the thermometer. The assumption of most scientists is that this rate of energy loss is proportional to the amount of energy within the water itself (black body radiation) and is therefore a good proxy for the energy content (temperature) of the water.
This is clearly not the case so we need to rethink.
What has happened is that the stirring of the water has created vortices in the water and this has affected the underlying electric field system within the liquid. This exists independently of the physical molecules and takes on the form of a collection of vortex structures. Quantum physicists call these ‘cold vortices’ or ‘quantum coherent domains’.
Stirring water then creates more of these vortices and moves energy from the kinetic movement of physical H2O molecules to the vortex domain where it is quite happy to stay for some time (days or weeks). Any energy stored in the movement of water molecules is subject to frictional losses and will either be expelled as infrared light, transferred via diffusion to adjacent molecules or moved into the underlying vortex structure.
Stirred water therefore contains quite a lot of stable energy and drinking the water will absorb this energy for later possible release into the body. In all probability this is beneficial for the health. A similar form of energy is breathed in through the lungs and consumed especially in raw food. Fritz Popp said that you would be ‘eating light’.
This result is surely vital in understanding the role of water in living systems. Energy moves from the physical domain to the ‘etheric’ and back again. The vortex structures act as an invisible energy buffer to help with local energy management. Vortices also control to a great degree the physical structure of the water, its dynamic behaviour and very possibly determine what sort of enzyme reactions are permissible and at what rate they occur.
Fine control of the field system implies fine control of the intracellular environment and this is the essence of life.
Warm water freezes faster than cold?
Warm water may freeze faster than cold water .. but why? From the above, the temperature of water is equivalent to the rate at which water is losing energy, but this says little about the amount of energy remaining in the water.
If warm water is losing energy faster then it may seem obvious that it is going to freeze faster as the assumption is that ‘freezing’ is all about the loss of kinetic energy of the water molecules.
This may not be the case though. As noted above, Martin Chaplin claims that the H2O molecules in a block of ice still behave as if they were in liquid water, forming only temporary dynamic structures with no permanent molecular bonds, so the solidity and macro structure of the ice cannot depend upon the molecular forces.
Freezing can occur at temperatures above 0 °C and water can remain liquid down to -80 °C.
Freezing is something to do with the energy ratio between the physical water molecules and the underlying energy field of the water. When the molecules are reduced in energy and the vortex field is high then the vortex field dominates and reduces the movement of molecules which then settle into a least-energy structure determined in part by their own physical structure and in part by the underlying ‘etheric’ body of the water; ice is formed.
This explains the effects of EMF on the formation of ice crystals; the field structure of water acts as an antenna for electromagnetic disturbances.
Water memory – Montagnier et al
To rephrase the famous water memory experiment in terms of vortex structures:
DNA and its accompanying spiral field structure is placed in a jar of water and stirred about a bit thereby creating vortices of both water molecules and of electromagnetic fields. A probe is inserted and some of the vortex energy moves through the water, through an interfacial layer of vortices next to the probe and transforms to a configuration appropriate to the conductive environment of the measuring instrument.
A frequency power spectrum is recorded and at a separate location is translated into an electromagnetic signal which is directed into another jar of water.
The instruments used will not just emit photons but also scalar waves as an artefact (Meyl). These, along with actual photons again navigate both bulk and interfacial water vortex systems, possibly changing their frequency profile along the way and likely changing from photons to scalar waves and back again.
Almost everything observed is an artefact of the experimental set up and the measuring process itself. It will be very hard to ascribe any certainty as to the meaning of the results.
EZ or ‘exclusion zone’ water
The volume of interfacial water next to a glass jar or even at the surface of the water has been labelled an exclusion zone by Gerald Pollack owing to its ability to exclude nano-particles and even solutes.
The explanation offered is that a layer of hexagonally structured water is formed and that the tight lattice structure is responsible for excluding all other molecules.
I want to propose an alternative idea: a layer of tightly packed electrical field vortices forms, transferring energy to the actual water to create vortices of moving molecules, and it is these structures that eject impurities.
Recall video footage of a tornado enveloping an entire house and sucking it up into the air. A mere gas has lifted something much denser than itself and moved it against gravity. The tornado is supplied energy from outside sources and the spiral structure concentrates that energy towards the centre naturally without further guidance.
Scientists in the Thunderbolts Project are adamant that this is primarily an electrical phenomenon driven by electrical currents and that the visible tornado is just a physical manifestation of such currents.
There is a precedent then for such phenomena. We don’t need to invent an extra structure or extra laws of physics, we can just re-purpose existing observations and theory.
The sharp change in the permittivity when moving from one material to another leads to an accumulation of energy and the formation of a vortex layer which alters the physical properties of the water itself.
This also provides the only likely sounding explanation for the phenomenon of surface tension, the creation of a thin layer at the water-air interface that is so strong that a lizard can walk across it. Water molecular bonds are so dynamic that they cannot possibly be holding the whole thing together so, as with ice, it must be the underlying electric field structure that is providing the strength.
Dynamic electromagnetic field structures are always in the form of vortices by the laws of electromagnetism.
Konstantin Meyl
From his book “Scalar waves.. ” page 73.
“Something corresponding holds for a basic property of potential vortices, being of a completely different nature. It is the property to bind matter in the vortex and carry it away with the vortex. Well-known are the vortex rings that skilful cigarette smokers can blow in the air. Of course also non-smokers can produce these air eddies with their mouth but these remain invisible. Solely by the property of the vortex ring to bind the smoke it becomes visible to the human eye. If our potential vortex transports something then it rather should be a dielectric material, so preferably water.”
Tubular flow (Pollack)
Video footage shows nano-particles flowing through a nafion tube with no visible mechanism of propulsion. Application of infrared light speeds up the flow.
An exact description is elusive, but given the properties of field vortices outlined above, we have all the elements necessary to make a credible guess.
Infrared light is just another form of field movement with a semi-stable wave structure that propagates at the speed of light. This enters the water where it de-stabilises to some sort of scalar wave or ring vortex which travels through the water and through the wall of the nafion tube. At the interface with the water inside the tube, vortices are formed similar to tiny tornadoes, where they start to affect the structure of the water molecules themselves and expel the microparticles thereby forming an exclusion zone.
We can consider that in due course a larger vortex structure is formed around the tube and will tend to concentrate ambient heat (vortex energy) inwards towards the tube itself.
Such an influx of energy will need to go somewhere. However, we do not see dissipation and disorder as is commonly expected but an increase in order as a vortex system self-organises and again starts to move the water.
The self-organisation is of the electrical field and is therefore almost instantaneous. A preferred direction is decided upon fairly quickly and a stable flow of electric vortices will start to drag the water along with the expelled micro particles.
This arrangement predicts a continuous flow that can continue indefinitely as long as there is an input of energy in the form of either heat or radiation of some sort. Infrared seems preferred for some reason (resonant frequency of the vortices?).
The mechanism suggested by Gerald Pollack involves a continuous build up of EZ layer to squeeze out the water. It sounds just about credible that this should work but less so that it should be sustainable indefinitely. It seems that the tube should simply fill up with EZ water and that the flow should then cease.
The whole phenomenon apparently has movement arising from stasis and structure arising from disorder. These phenomena seem out of place in classical explanations but in terms of vortex physics, movement is permanent and the default tendency is towards the creation of vortex structures and the concentration and ordering of energy rather than its dissipation and disordering.
Energy enters the system from outside and is guided towards doing something useful by the geometry of the experimental set up and the relative conductivity of the materials.
From chaos to order in active fluids – Morozov; Wu et al.
In this paper, some biological substances including ATP were mixed with water and the resulting solution placed in small tubes and cylinders.
The fluid spontaneously organised itself into vortices which oriented themselves with respect to each other and then the whole thing started to move in a single unidirectional flow.
Self organisation into vortices and thence into a coherent flow is therefore possible.
Free electron flow in quantum domains
Emilio del Giudice has claimed that water in biological systems is in some sort of elevated quantum state which reduces the ionising potential and creates a plasma of free electrons.
What has happened is that he has detected tiny electric currents and assumed that they consist of moving electrons, which must therefore have been stripped from water molecules. The voltage needed for such currents is very small and so a reduced ionisation potential is concluded.
See the page: What is electricity? for an argument that electric current does not consist of flowing electrons at all but a stream of electrical eddy currents instead. These need no electrons and hence no ionisation and can form at arbitrarily small(?) ‘potentials’.
These are measured as ‘current’ or ‘voltage’ within measuring devices and seem to be the preferred way of energy transport in biological systems.
Biological superconductivity
Another phenomenon reported is that of electric currents whose magnitude is independent of the resistance and so presumably independent of the originating voltage.
This is interpreted as some kind of superconductivity but again is an obvious sign of field movement consisting of electrical scalar waves or ring vortices. If this really is electric current where the current is independent of the voltage then what causes the current?
Watch videos of ring vortices in water to understand that the speed of the ring is independent of the propagation around the vortex and that the energy within the ring is not necessarily reflected in the outward aspect of the formation. Frictional losses are related to the outward geometry but the energy content is hidden from view.
The conundrum of the independence of current and resistance is now explained as the resistance is determined by the outward aspect but the current by the inner energy; these are not necessarily related to each other.
If an electrical version of this shape were to enter a measuring instrument then the shape would adapt to fit the conductive environment of the instrument and some energy would be released. Depending upon measuring techniques it may even appear that more energy has arrived than set off in the first place. This may possibly even be true if the vortex stream has managed to absorb exogenous energy along the way.
Frequency transduction
Papers on Quantum Biology will mention that intracellular water is capable of transforming photons from one frequency to another, typically two photons of infrared will be input to a quantum domain and one photon of ultra-violet (high energy) will come out.
No explanation exists in classical physics and so a new explanation from quantum physics is formulated. There really is no need for this sort of thing when the phenomenon lends itself to at least two simple explanations from vortex physics.
First: What is imagined to be photons is in all probability some sort of scalar wave and these do not have a fixed frequency. They can move at different speeds through different media and a change in speed will necessarily mean a change in frequency with no loss or gain of energy.
Second: Intracellular water is permeated by a vortex field and any vortex can absorb more energy in the form of photons or other vortices. This energy can be of an arbitrarily small amount, it need not be of a limited ‘quantum’ amount.
A vortex will then absorb energy and in doing so become bigger and more energetic up to a certain threshold whereupon destabilisation occurs and some of this energy will be discharged again in the form of a field vortex which is then measured as a photon.
The vortex is acting as a simple accumulator with a discharge threshold. However, if experiments are set up whereby an infrared photon is always input and if, because of the ambient conditions, the output is always an ultra-violet photon, then the whole setup looks like it is really just converting ‘frequency’.
If this phenomenon is common then it will look like a fixed quantum of energy is always output and if this fixed quantum output is used as an input to further experiments then the strong impression is created that all energy is quantised!
We do not know the precise conditions that lead to photon absorption but it seems likely that it has something to do with resonance, meaning that the frequency of the absorbed photon is equal to the frequency of the spinning vortex or at least one of its harmonics. This again leads to the impression that energy only exists in discrete quanta.
It may well be that a large collection of vortices will self-entrain into a single macro domain of a single resonant frequency. Expect complex rhythmic behaviour typical of such systems. This seems likely as such patterns are again described by quantum physicists and again attributed to quantum effects: Quantum Coherent Domains
If most photons measured are really electric vortices then these entities do in fact contain discrete amounts of energy and such energy will appear quantised but this effect is an emergent property and by no means ‘fundamental’. The idea that there is a minimum amount of energy is an illusion created by the limitations of the theory and the crudity of the measuring process.
An electric voltage was applied across some water droplets and the resulting discharge patterns were photographed. Light is emitted at the interface between the water and the air in shapes that are both characteristic and stable. These resemble the vortex structures emitted from the connections on high voltage power lines.
This is well known phenomenon (Wikipedia) and has some explanation in terms of classical physics.
If I understand correctly, a voltage creates an electric field which causes charge separation, with charges rushing to the periphery of the water droplet. Here they build up until a critical density is reached, whereupon a large potential will cause ionisation (charge separation) of the surrounding air. This ionisation is visible as a spark discharge and somehow self-organises via an emergent electromagnetic field to form the characteristic shapes we see.
Well, this sort of discharge usually forms lightning-type discharges, i.e. Lichtenberg patterns.
Alternative hypothesis: A ‘voltage’ is really an area of electromagnetic ‘field turbulence’ which, when introduced to a medium will adapt accordingly and propagate as either electric or magnetic field vortices. An interface will accumulate energy (see above) before discharging as more (invisible) vortex filaments.
These vortices can ‘unwind’ to form the stable propagation that we call ‘light’ and this may be visible at relatively low voltages as ‘bio-photons’ or ‘hydro-luminescence’.
Increased voltage may result in more light or may in fact cause ionisation when a critical value is reached. In this case however, it is the field that is the cause of the ionisation and whose helical form determines the structure of the discharge. In the case of classical physics it is the less than subtle process of ionisation that somehow results in a stable vortex.
In the vortex hypothesis, the helical structure arises naturally from an electromagnetic field according to the laws of physics and precedes the visible effect.
A low voltage form of this is precisely as described in the section on Tubular Flow above and the vortices will transport matter away from the interface thereby creating an exclusion zone.
There is no such thing as static electricity as commonly imagined and even descriptions from mainstream science are self-contradictory. All electromagnetic fields are composed of ‘living’ filaments of spiral field vortices which propagate at the speed of light and contain their own ‘energy’.
‘Movement’, i.e. field movement is intrinsic to electromagnetic fields, the vortices want to go somewhere, meaning nothing is ever truly static and the field itself can act as an energy source.
Field propagation is at the speed of light as with photons, but the propagation speed of a field vortex will depend upon the pitch of the vortex or the exact characteristics of the ring structure.
A conventional static field is a conglomeration of moving vortices. However, this fine grained structure has been missed owing to the crude nature of the measuring instruments and the unquestioning acceptance of an over simplified and inconsistent theory.
Classical theory
The classical model of an electrostatic field is based upon the idea of a ‘charge’ (an electron) and an associated ‘force field’ which adopts a radial configuration and obeys an inverse square law out to an infinite distance: Coulomb’s law
For most practical purposes this seems to work but consider what happens if a shield of lead (Pb) is applied to eliminate the field and then removed; the field disappears immediately and is then instantly renewed. Coulomb’s law should still hold but this means that the field should come into existence again all the way to infinity in no time at all!
Physicists know this and know that it is impossible but seem to think that when the shield is removed, what happens is that the field somehow repairs itself from the charge outwards, radiating to infinity at the speed of light, whereupon it knows to stop and stabilise in order to re-establish Coulomb’s Law.
So the field has ‘moved’ outwards (i.e. it is not static) and it has originated from a small charge which never seems to run out of ‘field substance’, never runs out of energy to renew an infinite field in an instant and maintain it indefinitely.
Similar concerns apply to what happens if an electron is moved. In this case, in order for Coulomb’s law to hold, the entire field all the way out to infinity must also move with it.
This is inconceivable to sane people and Newton had similar concerns about the nature of gravity. Any instantaneous action at a distance is in any case a contradiction of the principles of special relativity and so classical physics and relativity are at odds with each other. They cannot both be true at once and the absurdity of the standard description means that classical electrostatic theory at least is flawed and even inconsistent with Newton:
“This form of solutions need not obey Newton’s third law as is the case in the framework of special relativity (yet without violating relativistic-energy momentum conservation)” – Wikipedia
No charge!
Classical theory relies heavily upon the idea of ‘charge’ as being the source of electric fields, but charge as such does not exist and attachment to this concept has proved to have a stifling effect upon improving electrostatic theory.
How is charge measured? How do we know it exists? It has not been described directly but we ‘know’ it exists because we can measure the forces exerted by it and then use Coulomb’s law to calculate the amount of charge that must have created such force.
This is very obviously a circular argument: “Charge creates force so any observation of a force is proof of the existence of charge”. Clear bunk.
Vortex theory: the electron
According to the vortex theory of Konstantin Meyl, an electron is merely the ‘vortex radius’ of a spherical-toroidal shaped electromagnetic field. The vortex was created from an extended field of an arbitrarily large size which continues to morph, mutate and expand throughout the cosmos.
The field has energy of its own and is self-maintaining by itself but in practice will interact with the local field structure, whether this be within an atom or in the ‘void’ of space. Measurements of the field around a particle will imply a spherical structure and lead physicists to infer the existence of ‘charge’ because that is what their theory says.
Within this framework, the whole of ‘matter’ is described as field structures and the only ‘forces’ available are electromagnetic forces. Therefore, the only way to move an electron is by the application of a motivational field. Such field will interact with the field surrounding the electron and the effects will spiral inwards towards the vortex centre of the little ‘particle’ thereby causing movement of the vortex.
So here it is the deformation of the field that leads to the movement of the ‘charge’ and not the other way around.
In vortex physics, the field is the primal cause and the illusion of matter is a downstream effect. Classical physics tries to have all this inverted, with ‘matter’ or the ‘properties of matter’ (charge) as the origin of force . This just leads to confusion.
Charged objects
A single electron takes the form of a single spherical vortex structure but a charged object such as a balloon or a charged metal sphere is a different matter.
The top of a Van de Graaff generator is a conductive ‘sphere’ filled with electrical eddy currents. These are field vortices that are not stabilised into electrons or positrons and are free to mutate into different configurations as conditions allow.
Vortices move and propagate, they move to the surface of the metal via mutual repulsion and form a ‘layer’ owing to the difference in conductivity between the metal and the surrounding air. The fields act as accumulators and gather sufficient energy to propagate into the atmosphere, possibly taking on a slightly different configuration appropriate to the ambient conditions.
A radial field of electrical filaments emanates from the sphere and propagates outwards to infinity. Measuring devices will take an average over a relatively large area of this field and conclude a ‘potential’ that diminishes according to an inverse square law.
The sphere is distributing energy and so the field is diminishing accordingly. This is interpreted in the mainstream as ‘charge loss’ i.e. the loss of actual matter (electrons or ions) from the object! A pattern to look for in physics is the offhand dismissal of ‘losses’ and ‘noise’ as if these things need no explanation, as if the laws of physics do not apply here. By ignoring inconveniences, the impression is created of a consistent theoretical framework when nothing of the sort exists.
Coulomb’s law (vortex interpretation)
So measurement of field strength (electrical potential) is really an average of the effects of field vortices and this will approximate an inverse square law according to geometric considerations alone; the filaments spread out over a greater volume of space and this is sufficient to produce the law.
Now consider shielding with a lead cage and then removing it suddenly. The eddy currents propagate outwards at close to the speed of light depending upon helical pitch and the field is renewed in due course.
Theoretically the propagation is out to infinity, but it doesn’t ever stop as infinity is never reached, and in any case the field is emphatically not static but in a state of continuous radiation, with continual concomitant ‘losses’ from the charged sphere.
If the sphere is moved suddenly, then field effects will propagate outwards similar to the way that waves and eddies spread from a stone tossed into a whirlpool. Field propagation itself is at the speed of light but emergent effects will move at different speeds according to their dynamic geometry. The field ‘travels’; it is never static.
Friction
If two substances are rubbed against each other, the atoms do not actually ‘touch’ each other as such an idea does not make sense in either classical or vortex physics. Do atoms ‘touch’?
Instead what happens is that the vortex radii of many electrons will come very close together, creating very strong field interference. The kinetic energy of rubbing is converted to vortex energy in the atomic structure and the associated ‘field drag’ is experienced as a resistance to movement, i.e. ‘friction’.
We now have an excess of vortex energy over baseline and eventual destabilisation will lead to several phenomena:
Transfer of vortex energy throughout the medium is known as heat diffusion
The reconfiguring of a vortex into a plain old photon in the infrared range
The dissipation of electric eddy currents away from the material
The third of these is what is called the triboelectric effect. A ‘static’ electric field has been created without the need to rip an atom apart by stripping electrons from the outer shell.
The Van de Graaff generator
We are now ready to tackle this complex subject. This is simply not understood by the mainstream even according to their own account.
In their version, positive ‘charge’ is created either by extracting protons from the centre of atoms or by stripping off electrons to leave a positive ion. Both of these are ‘matter’; they move obligingly around the circuit, are recreated by field induction and gather on the outside of the metal shell where they create the infinite field in an instant and then leak away to the surrounding air. The metal dome is continually losing material substance but never seems to shrink, run out of ions or develop any sort of film at the surface. Very suspicious.
Eventual spark discharge is via ionisation of the surrounding air in accordance with an enormous ‘voltage’; yet another breaking down of atomic structure.
The explanation from vortex physics is still not simple but really only involves a single process, which is to say the transformation of field vortices from one semi-stable structure to another depending upon the local conditions.
The rubber band is an insulator and so favours, (via the triboelectric effect) a vortex of magnetic potential over a vortex of electrical movement. Movement is intrinsic to field vortices which aids in self-organisation of self-similar structures.
The field structures from the band will form particularly around the top of the wheel where there is elastic deformation of the rubber. They will transmute into positive electrical vortices at the surface and propagate through the air to the graphite brushes.
More propagation is guided by the strong conductivity of the metal dome and eventually field propagation occurs from the dome to the outside atmosphere as described above. No flying matter is needed and no ‘field induction’.
Spark discharge may well be accompanied by ionisation but this may not necessarily be the cause. If the field is uniform then why is the discharge so localised and why does it often take on a vortex shape? Discharge is via a field vortex, and the centre of the vortex increases the field intensity, which leads to ionisation.
So it is the field vortex that precedes and therefore causes the ionisation and not the ionisation that somehow forms vortices as it breaks down the air molecules.
The huge voltages claimed therefore may not be real but may be local effects combined with measurement artefacts. In any case, a ‘voltage’ is the result of averaging over millions of smaller field phenomena. It may even be that vortex filaments are attracted towards the measurement instrument!
The Earth’s electric field
The Earth’s electric field is likewise not static nor uniform. Discharge from the ionosphere is in the form of field vortices and it is these that can affect the electromagnetic bio-field of organic life forms, having some beneficial effects in maintaining bio-rhythms and some detrimental effects in promoting disease. See: Influenza and weather
Wikipedia
Coulomb’s law is described by Wikipedia both as ’empirical’ and as ‘fundamental’ at the same time which does rather highlight the confusion over the whole idea.
Concluding remarks
The idea of a static field and the requirement that it must come from ‘charge’, that it is inextricably dependent upon ‘matter’ may be good enough for many practical purposes, but is not theoretically tenable and therefore unsuitable as a foundational concept in physics.
It may be framed as merely ’empirical’ but is invariably regarded as ‘fundamental’ and immutable in the absence of anything better. This attitude has proved quite crippling in terms of making any sort of advancement in a wide area of theoretical physics and has resulted in the workings of biological systems seeming utterly incomprehensible.
This has led many serious researchers to conclude the existence of some sort of vitalistic force in living systems. They are not wrong. The élan vital is nothing more or less than the organised movement of field vortices as they impact upon biological tissue. The tissue guides the field movement and the field energy ‘enlivens’ the tissue.
The idea of a field that is static, uniform and dependent upon charge should be cast aside in favour of a field that is moving, vortex-like and independent of a material source.
The vortex theories of Konstantin Meyl are not just a speculative adjunct to contemporary science but a necessary replacement for many areas.
The rate of the Earth’s rotation varies over time and sometimes on a very short time frame. There seems to be some correspondence with both solar and lunar activity.
Hypothesis: Giant electromagnetic filaments exist between the Earth and sun and also freely floating in empty space. The movement of Earth, moon and sun combine to disturb these and cause variations in the activity of all three bodies causing all sorts of interesting correlations. If the edge of the Earth passes near a vortex then the rotation speed may be altered, the Earth may wobble on its axis and there may be disturbances of the Earth’s weather patterns.
The language in this paper is difficult but there are some hints that support the above hypothesis:
“If the time interval is very short, then a sudden change in direction of tangential forces may cause the appearance of a vortex. Often in communications of airplane paranormal disasters, a sudden vortex is mentioned. When a paranormal ship disaster happens, due to turbulence, very cold air is sometimes noticed”
Film footage of plane disappearances often shows what appear to be field vortices following the plane, although some claim they are photoshopped. Disappearances in the Bermuda triangle were associated with compass anomalies. The Bermuda triangle is at a latitude of exceptional field vortex activity as reflected by large measurable ground currents induced by atmospheric electricity.
“Searching through a list of disasters, believed to be paranormal, involving airplanes and ships, one can find some cases that happened when the Moon crosses the turning points. Could it be a mere coincidence?”
“It must be also noted that the Bermuda Triangle and the Pacific Devil Triangle are situated between 18˚ to 30˚ latitude, the correspondence of 18˚ to 30˚ celestial declination. May it be also a simple coincidence?”
“It must be admitted that the great influence of the Moon and Sun in the formation of ocean and land tides has always been readily accepted; but, naturally, there are also other dynamic perturbations submitted by our planet from these celestial bodies.“
Q: Are the tides on Earth solely the result of the moon’s gravitational field or are they also influenced by ‘space-energy’.
A: The energy exists in space and is harnessed to the moon meaning that its influence is statistically almost indistinguishable from that of the moon’s gravity.
There seems to be plenty of evidence to suggest that connecting with the Earth has beneficial effects on health and in particular is efficacious in resolving many inflammatory conditions. However, the exact mechanism by which this happens is unclear and claims of electrons flowing into the body are unsupported by either experiment or theory.
More likely is the idea that some sort of flow of ‘field vortices’ (scalar waves) is responsible for the effects we are seeing.
The standard explanation
Shown below, a 200 volt difference is claimed between head and feet and this is said to drive electrons up into the body through the feet to relieve some electron deficit in the body.
The 200 volts is the bottom end of a 200,000 – 500,000 volt difference stretching up into the ionosphere.
A cloud of electrons covers the Earth’s surface. It doesn’t dissipate into the atmosphere and doesn’t creep up our positively charged shoes and doesn’t drift to our positively charged bodies through the air. The electrons wait until we walk barefoot and then suddenly enter our bodies and somehow are guided to where they are needed.
Problems with this idea:
None of this has been demonstrated
Why don’t the electrons at the Earth’s surface repel each other and dissipate?
Are the ground based electrons the cause or the result of the field?
How do the electrons move against the field gradient to our head?
How do the electrons move to the interiors of our cells?
Where did the electrons come from?
Why am I still alive?
Measuring the field
Fair weather atmospheric electricity – R.G. Harrison
Field gradient at Reading Observatory was measured using a field mill and the results are shown in the chart below.
Field gradient (PG) measured at Reading observatory using a ‘field mill’.
The field gradient varies considerably and even becomes negative during rain. Does this mean that the entire field up to the ionosphere has inverted or is it just a local change?
If the field is really caused by a potential difference between the ionosphere and the ground then how do local variations occur?
What happened to all the electrons during the rain and how did they return afterwards?
Field mills
A home made field mill is pictured, consisting of two capacitor plates, the top one spinning around like a windmill to create an alternating current from an assumed ambient field. [video] The measurements are calibrated against some laboratory experiment and measurements from the mill are then interpreted as ‘field gradient’.
Concerns
The device only ever measures effects within a gap of a millimetre or so. Any field gradients outside that are calculated rather than measured. It is assumed that the field gradient will be more or less uniform over the range required and so the results of local measurements are simply multiplied up to the desired height (a sketch after this list of concerns makes the extrapolation explicit).
Effects near to the ground are surely dominated by the presence of the electron cloud? They are not necessarily representative of what happens further up in the atmosphere or even towards the top of the body.
It seems unlikely anyone has explicitly measured the 200kV difference between 10,000 feet and the ground. I didn’t find evidence that a 200 volt difference has been measured between a person’s head and feet.
If only local effects have been measured then the possibility should be considered that only local effects are present.
We have two metal plates in close proximity moving at speed and driven by a motor! How much voltage is induced by the machine itself?
If a field mill is calibrated in a laboratory then there is an implicit assumption that its behaviour within laboratory conditions is somehow representative of what happens outside the laboratory and under the influence of different weather conditions.
Other instruments give similar results to a field mill but that still doesn’t mean much other than that possibly the same artefacts are being created by slightly different techniques.
The central problem of ‘charge physics’
Standard texts on physics will describe an electric ‘field’ as being created by charge separation, with ‘charge’ being synonymous with ‘electrons’.
To create an electric field between the Earth’s surface and the ionosphere then, we need to separate out a great deal of electrons and move them towards the Earth somehow.
The problem now is that the field thereby created has a gradient that promotes the movement of electrons upwards, away from the Earth, and since it is the electrons themselves that are the actual cause of the field we find that the field gradient is constantly diminishing and destined to disappear altogether.
The electron gradient is seemingly both the cause and the result of the field at the same time!
We therefore are looking for some method of somehow pushing the electrons back to the Earth’s surface against the field gradient caused by their own accumulated charge.
The most common explanation for this is that lightning strikes bring the electrons back down to ground level where they spread out and distribute themselves fairly evenly over the ground.
This is becoming a typical theme of classical physics whereby anomalous phenomena are ‘explained’ by some sudden violent outburst such as a Big Bang, electrical ‘discharge’ or atomic deterioration.
This approach is particularly unsatisfactory in this case as:
We are still assuming charges move against a field gradient – how?
Lightning can go upwards and even sideways as well as downwards
Electric currents themselves do not consist of moving electrons: What is electricity?
A better explanation is given for lightning by Konstantin Meyl here: Ozone Engine
Vortex physics
Field discharge in the atmosphere is not uniform but instead in the form of vortices. The image shows helical discharge of an electric field which has somehow become visible as condensed water molecules. Similar patterns arise from the ‘corona’ discharge of power lines.
A field mill then is a device for catching field vortices of whatever configuration. The vortices enter the metal plates and spread out to form smaller eddy currents within the conductive environment. The moving plates create further electric turbulence which will find the propitious environment of the output wires and thence to the measuring device.
‘Current’ forms out of the melee and some pointer device is coerced into moving. These phenomena as a whole are interpreted as ‘field gradient’.
What happens if a few field mills are placed vertically one above the other? If a uniform field is present then they should all measure the same gradient. If the theory of vortex discharge is true then the top mill will harvest most of the crop and the lower ones will register progressively less ‘voltage’ as less vortex energy is available.
If it is claimed that a field does exist but that the mill is somehow distorting the gradient then what guarantee is there that what is being measured is what was actually there in reality and not simply an artefact of whizzing capacitor plates?
‘Voltage’ itself is an averaged measurement of field activity: What is electricity?
Static electricity is really just a collection of field vortices: Static electricity
Scalar waves in biology
Scalar waves (field vortices) are the medium of choice for the transport of both energy and information within biological systems:
Grounding is said to be a cure for many inflammatory conditions and there are papers on curing even ‘covid’, which is arguably a result of excessive energy or informational ‘noise’ carried on seasonal vortex discharges.
Most people do not require grounding to maintain good health.
Many diseases are the result of bad regulation (see: Bio-regulatory medicine), and regulation is via a scalar wave bio-field.
Eclipses
The solar neutrino stream increases in density during an eclipse causing possible ill health and making grounding essential:
“In the case of an eclipse of the sun effects on the biology, like problems with the heart among affected, at least can’t be excluded. If the scalar wave density increases above the density which is normal, then this has a positive effect on the economy of energy, as long as the body is capable to regulate the amount taken up. If the regulatory process however should fail . then the risk of a self-inflammation exists. Also straw bales and other organic and inflammable materials could thus go up in flames.” – Konstantin Meyl (Scalar Waves)
How does grounding work?
Many inflammatory conditions then are the result of energy mis-management. Energy is stored in various forms in the body but scalar waves are one of the more versatile. Too much of this energy however leads to imbalance, fatigue and inflammation.
Grounding provides a direct connection to the Earth which dissipates the excess energy and enables healing, often resulting in instant cures. So the benefit of grounding is not the result of any energetic or charged input but rather the release of accumulated energy.
No need for vast voltage differences or wading through electron puddles.
Legacy biology claims that aggressively reproducing bacteria are responsible for cell death in the lung tissue. The body tries to frantically repair the damage whilst the immune system is responsible for killing off the bacteria at the same time.
The New Biology paradigm is happier with the idea that it is the tissue that dies first and that the bacteria are not causal in the process but are merely opportunist scavengers that live off dead tissue.
But what causes the tissue necrosis in the first place and why is the lung tissue seemingly more susceptible to this type of disorder than other parts of the body? Why is pneumonia common in hospitals with supposedly strict hygiene protocols and why does it seem to be a progression of other respiratory conditions such as influenza? Why don’t the nurses ‘catch’ it?
First consider that the lung tissue needs a continuous supply of energy in order to maintain it. This is assumed to come from oxygen in the blood delivered via the capillary system. The job of the lungs though is to absorb oxygen from the lung cavities and deliver it to the rest of the body and this is achieved via a separate capillary system, the pulmonary capillaries.
The coexistence of two such systems is a complexity not seen in the rest of the body. I will guess that this restricts the number of maintenance capillaries somewhat, making the whole system a little delicate and meaning that any extra input of energy in this area would be most welcome.
Konstantin Meyl has stated that such an additional input exists in the form of electromagnetic field vortices which are transferred from fresh air through the lung tissue directly to the bloodstream. Air that has been breathed and had insufficient time to recover is depleted of vortices and depleted of energy.
Gerald Pollack has written a paper going a step further, claiming that there is no exchange of oxygen at all in the lungs and that all energy input is via electrical energy.
Hypothesis: This energy is not merely necessary as an input to the bloodstream, but is vital for the maintenance of local lung tissue. These vortices will be absorbed directly into the lining of the lungs and assist in maintaining healthy cells. Exercise will increase breathing and proportionately increase energy intake. The inhalation of stale air will reduce energy intake.
We can see now the possibility of necrosis prior to bacterial proliferation.
An already weakened patient is confined to bed and immediately suffers a decrease in energy input to the lung tissue and in due course the intake of stale air further reduces available vortex energy.
Nurses and carers do not succumb as they are walking around, breathing more air and not spending 24 hours a day inhaling ‘dead’ gases.
The disease seems to be a progression of a viral infection but it is a consequence of bad treatment instead.
Treatment
If the cause is a lack of energy in the air then we should expect that the treatment should consist of .. exposure to fresh air!
“Our systematic practice was to put all pneumonia patients during the day, for six hours, on the roof, in the open air, in all weather in which harsh high winds, rain and snow did not prohibit. Indeed, the patients were not always brought in for little sprinkling rains or trivial snowfalls, and many times were out when high snow banks formed a corral about the space in which the beds were grouped” – Northrup (1906)
“Gradually, after most careful precautions and constant watching, it became the firm conviction of all observers that such patients were decidedly benefitted thereby.”
A couple of videos from Rupert Sheldrake concerning the abilities of homing pigeons provide convincing evidence of our ignorance of this phenomenon. It isn’t just that nobody has any idea how it happens but that there doesn’t seem to be any chance at all that we could describe it in terms of any currently known scientific theories.
Points of interest:
Released pigeons typically fly straight home
A pigeon separated from the flock can still get home eventually
They can be blindfolded and put on a rotating table and still get home
They can navigate on a completely overcast day
Connection to the environs of the loft as opposed to the construct itself
A flock can, however, find its way to a moving loft on a ship at sea
Trans-generational communication of migratory patterns
All ‘reasonable’ mechanisms have been ruled out
There appears to be no explanation for these phenomena in terms of conventional science so we need to look further afield:
Theoretical constructs from vortex physics (Konstantin Meyl)
Evidence showing effects derived from the theory
Similar ‘patterns’ of geomagnetic awareness from Frank Brown
In videos such as the one below, Konstantin Meyl has demonstrated the transmission of power via Scalar Waves, also known as Tesla Waves, and theorised that this setup can also be used to transmit information.
The waves are electromagnetic in nature
Connection is one-to-one between the metal spheres
Once a connection is made there is no power loss
There is no inverse square attenuation of the signal (a short comparison with ordinary inverse-square spreading follows this list)
Such signals are unaffected by ‘matter’ and can tunnel through the Earth
The connection itself can absorb energy from solar neutrinos leading to more power arriving than was originally sent
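To see how strong a claim the absence of inverse-square attenuation is, compare it with ordinary radiated power spreading over a sphere. A minimal sketch with purely illustrative numbers (1 W transmitted, a 10 cm receiving aperture, and a claimed lossless link that simply delivers everything regardless of distance):

```python
import math

# Compare ordinary inverse-square spreading with the claimed lossless one-to-one link.
# All figures are illustrative only.

P_TX = 1.0                          # transmitted power, W
APERTURE_M2 = math.pi * 0.05**2     # a 10 cm diameter receiver, m^2

for d in (1, 10, 100, 1000):        # distance in metres
    # Ordinary radiation: power spreads over a sphere of area 4*pi*d^2
    p_inverse_square = P_TX * APERTURE_M2 / (4 * math.pi * d**2)
    # Claimed scalar-wave link: essentially all of the power arrives, whatever the distance
    p_claimed = P_TX
    print(f"{d:>5} m: inverse-square {p_inverse_square:.2e} W, claimed lossless {p_claimed:.1f} W")
```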
So an obvious hypothesis is that the pigeons are communicating with each other, at least in part, via this system. A connection, once established, is robust and distance is not an issue as the field itself is self-maintaining via the absorption of external energy.
Pigeons will be able to communicate over the horizon easily enough. The signal does not bend around the Earth however but simply tunnels through it; any pigeons left at home will act as a beacon for the displaced flock.
The brain
Meyl has stated simply that “The brain is a scalar wave computer” and that the nerves are scalar wave conductors. The waves are magnetic in nature and travel in the insulating myelin sheath around the nerve, with an electrical component travelling down the conductive body of the nerve.
This electrical component is a pale reflection of the true nature of the signal, but it is this ‘current’ that modern science has assumed to be the only thing of relevance to the functionality of the nerve. The structure of the magnetic part is the actual carrier of the information.
No transduction of energy or information is therefore required for this kind of telepathy as the electromagnetic activity of the brain is transmitted unmodified through the air using the same medium as the brain itself.
The phantom leaf effect
A leaf placed between two layers of plastic will leave behind some sort of ‘imprint’ that can later be photographed under a strong magnetic field.
What has happened is that vortex energy from the living leaf has moved to the plastic sheet, which, being an insulator, will favour the stabilisation of such energy into magnetic scalar waves. The electrical component has been minimised owing to the poor conductivity and a magnetic vortex system remains.
My suggestion, then, is that something similar happens with pigeons: that a whole flock will leave some sort of trace upon its environment and it is with this imprint that a connection is maintained, thereby enabling accurate homing navigation.
Connection to ‘place’
Pigeons whose loft was moved whilst they were away first returned to the original location of the loft and not to the loft itself, which suggests that the connection was maintained, not with the dead material substance of the loft but with the living ‘field’ of the forest surroundings.
Other accounts, however, involve lofts on the roofs of high-rise flats or on a ship at sea. Different materials have different conductive properties and different structures of scalar waves may form. Since the connection itself is absorbing neutrino power, it is conceivable that the integrity of the transmission could be maintained in such a fashion.
The nature of the connection
The connection is that of one electromagnetic field to another. The brain works via a set of nested toroidal vortex fields and directly absorbs similar energies from the environment.
In one video it is suggested that magnetic particles (i.e. ‘matter’) are required in order to detect the Earth’s field but this is not necessary; magnetic vortices will enter the field of the brain and have a direct effect on its operations. If there are any magnetic particles that are coerced into movement by magnetic forces then the only way that the body can detect such movement is via its effect on an electromagnetic field anyhow – so why did we need particles in the first place?
Field information is absorbed directly into the brain with little need for translation or interpretation.
So the whole of the brain field itself is the antenna for the reception of electromagnetic field activity and no specific organ is needed for this function. How would such an organ work anyhow? It would still need some means of collecting information and this would be an electromagnetic field complex.
There is no need to interpolate ‘matter’ in the middle of electromagnetic field interactions and in any case it is too crude a substance to play any part in conscious activity.
An extended consciousness?
The energy field of the physical brain is said to be measurable several feet away from the head and since this field is now almost synonymous with the ‘etheric’ brain itself, it maybe isn’t too fanciful to ask if this extension of the energy field might have some practical purpose.
The physics espoused by Konstantin Meyl allows for far more complex behaviour in electromagnetic fields than that of classical science. ‘Movement’ is intrinsic and the field structure has a tendency to form spiral structures. Energy and information are guided towards a vortex centre and the second law of thermodynamics is inverted. A concentration of energy takes place alongside the more familiar dissipative structures and all of this is highly propitious for the formation and maintenance of living systems.
Consider then that information external to a pigeon’s physical brain is caught in its brain vortex and will then spiral inwards towards the physical bird. We then have an antenna that is considerably larger than a tiny bird brain and the whole concept is starting to sound more likely.
A tadpole had its eyes taken out and grafted onto its hindquarters (mentioned in a paper by Michael Levin) and after recovering from the shock could navigate its surroundings quite happily. So it doesn’t seem to matter how the information gets into the body; it will be processed correctly nevertheless.
The bio-field of the heart is much larger than that of the brain so we can maybe think of this also as a receiver of scalar waves. Energy can radiate outwards at the same time as information spirals inwards; the whole of a pigeon can be considered as a scalar wave antenna.
Watch a single-celled organism find its way around a microscope slide in order to chase down food. It has no sensory organs, no brain and not even a nervous system, but is still aware of what is going on and manages to coordinate its movements accordingly.
A hive mind?
If a whole pigeon is a sensory system and pigeons are in constant communication with each other via scalar waves then what happens when they all gather together?
Is it in any way possible that the flock as a whole now forms a collective bio-field? A ‘hive mind’? Such a thing would surely increase both the power and sensitivity of the field. Being spread out over a greater volume it would have the capacity to receive a much weaker signal simply by collecting more of it.
In one study the behaviour of a termite colony differed depending upon whether or not it was separated from another colony by an aluminium sheet, suggesting some electromagnetic connection between the two groups. See: Distant cellular interaction
What happens within a murmuration of starlings? Are they merely exercising their wings prior to migration or are they creating a semi-permanent hive mind in preparation for navigation? A coherent field is formed that connects all the birds and this not only acts as an antenna but also as a collective memory, and possibly even has its own independent computational capacity.
The idea that this sort of disembodied mind could even exist will cause some to recoil, I know, but the actual mind is disembodied, in a sense, anyhow, as it is really just an electromagnetic field whose machinations are decoupled from the physical structures of the brain.
Again, anyone who thinks that the idea of a ‘consciousness’ emerging from the mere proximity of bird brains is absurd should reflect that the mainstream concept of consciousness is just this: an emergent property of the proximity of cells! If electrified jelly can make decisions then so can a connected set of pigeon brains.
Pigeons don’t need murmurations as they all live in close proximity anyhow.
Classical physics
Note that the above speculations are not even possible with classical electromagnetism. Here electric fields are either static, meaning they have no movement and don’t go anywhere, or they are photons which means they must necessarily shoot off at the speed of light in a straight line.
Neither of these configurations suggests the possibility of a self stabilising complex of vortex fields that can retain information whilst renewing its energy from external sources.
Again, the classical concept of electric currents is that of moving charge (electrons), which relies upon the idea of a voltage to push the tiny particles around as they have no motive energy of themselves.
This idea is just not very useful in any area of biology. Better is to think of circuits comprised of ‘field movement’ forming closed loop and helical vortex structures according to the updated Maxwell-Heaviside equations of Konstantin Meyl.
Vortex energy
Where do migrating birds get all their energy from? It does sound incredible that sufficient energy is stored as fat in a small bird and so we should consider Meyl’s idea that they are breathing in electromagnetic vortices along with the usual oxygen supply and that this is somehow being used in mechanical action to aid flight.
Gerald Pollack has written a paper giving credible arguments to suggest that breathing has not much to do with oxygen anyhow and that in fact there is no gaseous exchange in the lungs at all! Pollack is suggesting an input of electrical energy in the form of electrons. However replacing ‘electron’ with ‘field vortex’ makes for easier reading.
Questions: Can this vortex energy enter the body via any other means than the breath? Is it possible that the general discharge from the ionosphere could be gathered by the collective flock vortex? Could this help to maintain the field and could some of that energy enter the body of a migrating bird to help it in its flight?
Inheritance of migratory paths
Inheritance of acquired characteristics does exist and has been demonstrated in laboratory experiments.
In one example, rats were made to fear the smell of cherry blossom and their offspring inherited the fear. In another, a caterpillar was trained to crawl towards a red circle and the behaviour was inherited by the emergent moth. Behavioural patterns such as this have been transferred from one snail to another simply by injecting material from one animal into the other (Michael Levin).
This all works because inheritance has nothing to do with DNA (See: The DNA delusion) but everything to do with the transference of a scalar wave complex from one generation to the next (See: Telegony and Evolution and Inheritance).
The rapid ‘evolution’ of bird migration paths is therefore no surprise from this point of view. Memories, intents and complete behavioural patterns are codified into scalar waves and these are precisely the format that is needed for inheritance, persistence and communication between individuals or groups of individuals.
These wave vortices are a biological Theory of Everything.
Navigation by scent
In one video, the idea of navigation via smell is mentioned but discarded because of the observed fact of pigeons homing with the wind behind them. Maybe, maybe not. Most people will assume that ‘scent’ consists of a chemical discharge but evidence and argument suggest otherwise: Scalar waves and nerves.
Scent is conveyed via scalar waves and is absorbed directly into the olfactory nerve conduit. The possibility now exists of a direct scalar wave connection between scent detector and target, with reduced attenuation, enhanced sensitivity and magnification via neutrino absorption.
Consider the abilities of certain moths to detect a mate several miles away. Can they really detect the direction at this distance by the sampling of molecules or is it rather the case that an essentially electric connection has been formed and that it is this that provides the necessary information? Is it just the physical antennae that are receiving the information, or the whole of the bio-field?
Scents can easily leach through a plastic bag. Is this really caused by molecule leakage or by scalar waves tunnelling through an insulator?
The intensity of a smell clearly varies with wind direction, which does rather indicate that it is emanating from freely floating molecules. However, that does not preclude the possibility of an additional, semi-permanent connection with a fixed source of the scent vortices.
A global navigational map?
Stunning work from Frank Brown demonstrates the ability of various animals, shellfish, plants and bacteria to synchronise to cosmic rhythms.
Organisms seem to know the time of year, day and position within the lunar cycle. They are aware of latitude and seemingly respond to external pressure changes even when kept at constant pressure within a laboratory. Storm conditions are predicted two days in advance using precisely this ability.
Faraday cages reduce these abilities and so the effects are assumed to be electromagnetic in nature. It is quite credible then that a pigeon or a flock of pigeons know quite well what is going on in their locale and exactly how it relates to solar, lunar and weather conditions. This isn’t quite the same as having a static map though and it isn’t obvious that navigation is possible from local information alone.
The point here though is that the Earth’s magnetic field is not just something that points North or South but has local geographic and temporal refinements that carry a large amount of information that has functional interpretations by every organism on the planet.
Scientific instruments are just fancy compasses and do not possess the refinements necessary to interpret such field information. Theories of electromagnetism inevitably reflect the crudity of the measurements that support them and are therefore themselves necessarily oversimplified. The result then is a science that effectively rules out half of the things it is trying to explain!
Connection to what?
Birds whose loft has been moved will initially return to the precise spot where the loft used to be – so what is it about this spot that is so special?
On the other hand, birds released from a ship at sea will return to the current position of the ship – so, again, where is the source of the connection?
The phantom leaf experiment showed a precise imprint of a leaf in some polymer sheets that persisted for only a few seconds, which doesn’t sound like a very good candidate.
Several considerations may be pertinent:
Plastic polymers are electrical insulators which would therefore encourage the formation of scalar waves where the electrical component is minimised and hence the magnetic activity maximised. These are described by Meyl as magnetic potential vortices and are of great biological significance.
Electrical conductors such as the steel of a ship will form electrical eddy currents by a similar mechanism to the above.
Biological systems prefer the magnetic versions of the waves for both internal regulation and the conjectured extra sensory communication. Internal vortices are friction free (no energy loss) and will in any case absorb energy from heat and other sources within the body.
Once a connection has been established, the connection itself will absorb energy along its length from solar neutrinos to maintain itself and will therefore grow proportionally stronger as the endpoints become further apart. (Sheldrake’s elastic band analogy is accurate in this respect).
So it seems likely then that a connection is made with some biological activity in or around the loft. In the case of pigeons released from a rural location this might be the grass and trees of a forest, and in the case of birds accustomed to living in a ship or at the top of a block of flats it is the crew of the ship or the pigeon handlers themselves that serve as a useful anchor.
The answer to the question of whether atoms (or even objects) actually touch each other is dependent upon what framework is being considered, whether it be classical, quantum or particle physics. The only consistent framework is one that regards the entire universe as modulations of a single continuum, a ‘vector field’.
Classical physics
In classical (Newtonian) physics, space is well behaved and objects occupy well defined volumes, with the gaps in between filled with either ‘gas’ or ‘vacuum’. Things are made of different substances, e.g. glass, metal, air, water, etc. No object can be in two places at once and we cannot have two different solid objects occupying the same position in space. There is no real sense of the objects being made of ‘atoms’ nor what those atoms might consist of.
So imagine a metal ball falling onto a glass table, for example. The table top is at height zero and the ball is at some height, a metre, say.
Now at what point does the ball bounce? What is the height above the table at which the downward speed is zero and it starts to move upwards again? At what point do they ‘touch’?
Does the lowest point of the ball ever reach a height of zero? If the answer is ‘yes’ then the point where the ball touches the table is occupied by both metal and glass at the same time. This is a contradiction of the whole idea of solid, separately defined objects and it is not relevant that it is only one ‘point’ that these objects have in common. The Laws of Physics must apply everywhere or why are we bothering?
So the ball must come to rest at some finite distance above the table and this is the point at which the velocity reverses direction. The two objects never touch; they cannot. A finite distance is maintained between the two materials at all times.
How then does the ball reverse direction?
Energy transfer
Kinetic energy is transferred from ball to table and then transported back to the ball to make it go in the opposite direction. This seems fine but it means that the energy is transferred without contact between the objects, meaning that it must, at some non-zero height, leave the ball and enter the table; it must pass through a finite amount of air (or vacuum) all by itself.
A force field?
The ball is moving in one direction and then turns around before touching the table top and starts to go in the other direction. This is enough to deduce some force of repulsion even without knowing how it should arise. Some kinetic influence is maybe emanating from the table top that repels other objects before they reach it, before they make contact.
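A toy numerical sketch makes the point concrete. It assumes nothing about the origin of the repulsion, only that some force grows very steeply as the gap between ball and table closes; the exponential force law and every constant below are purely illustrative. Integrating the motion shows the ball reversing direction while the gap is still finite:

```python
import math

# Toy simulation: a ball falling onto a table that repels it through a
# short-range force. The force law and all constants are illustrative only;
# the point is simply that the velocity reverses while the gap is non-zero.

G = 9.81        # gravitational acceleration, m/s^2
MASS = 0.1      # ball mass, kg
F0 = 1.0e4      # repulsion strength at zero gap, N (illustrative)
RANGE = 1e-4    # characteristic range of the repulsion, m (illustrative)
DT = 1e-6       # integration time step, s

h = 0.01        # ball starts 1 cm above the table
v = 0.0
min_gap = h

while not (v > 0 and h > 0.005):                  # stop once the ball is clearly rebounding
    a = -G + F0 * math.exp(-h / RANGE) / MASS     # gravity plus steep short-range repulsion
    v += a * DT
    h += v * DT
    min_gap = min(min_gap, h)

print(f"closest approach: {min_gap*1e3:.3f} mm above the table (never zero)")
```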
This surely implies the spooky action at a distance that both Einstein and Newton disliked.
Classical mechanics is busted?
The idea of classical mechanics arose from an attempt to formulate simple everyday observations in terms of fairly basic mathematical formulae, but as the above shows, we can’t describe ‘bouncing’ or even the idea of ‘contact’ in even a half-sensible manner!
Either classical physics is wrong, the mathematical formulation is wrong, or maybe it is simply not possible to describe reality in terms of familiar mathematics. We don’t even need to try anything ‘fancy’; we get into trouble simply attempting to define the boundaries of everyday objects.
Mathematics
The problem has arisen from defining objects as closed subsets of the continuum, that is to say, as spaces that incorporate their own boundaries. We can try defining objects as ‘open intervals’ whose boundaries are not part of the objects themselves, but this doesn’t really help.
We still can’t have two objects ‘touching’ as they will now always be separated by a single point at least and this point will never be part of either object nor can it be part of any other object. We have ended up with a universe containing an infinite number of empty points which physically separate the objects within it.
Maybe we can do some calculations with this model but it is highly unsatisfactory as a description of the nature of reality. I would contend that this is on a par with quantum physics for boggling the mind.
Field physics
The observation or maybe ‘deduction’ that the ball is repelled before it even reaches the table gives a clue to a better formulation of space and matter.
Even without doing any clever physics we can say that the objects are not separate in space but are part of, or embedded within, some omnipresent force field that controls the movements of even the largest objects and ensures that they conform to some universal organisational principle.
The idea that this force should ’emanate’ from the objects themselves and should affect other objects at a distance is, as Newton himself put it: “so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.” – Newton 1692 – Wikipedia
Vortex physics
The vortex physics of Konstantin Meyl formulates single atoms as spherical vortices in an electromagnetic field. Negatively charged field structures will repel each other, with the force of repulsion increasing with proximity. This force reaches a maximum at the vortex ‘radius’ (shown) and effectively gives the impression of solid matter.
It isn’t quite clear what is meant by ‘touching’ in this respect. Two vortices will repel each other so much that it is unlikely their radii will overlap but if they do then that is fine as the respective fields will simply add together; we do not have separate substances as with classical physics and do not have discrete ‘solid’ particles to worry about as with particle physics.
The whole of the universe is simply a shifting field continuum which is given form by the field structure itself, with the behaviour of vortices giving the impression of solid objects making contact, exchanging energy and bouncing off each other.
To ask if two vortices make contact is to ask whether two eddies in a river will ‘touch’ each other. They can come close and bounce off each other but the idea of touching is somewhat nebulous as neither has a well defined boundary at all and the vortex influence associated with each will conceptually extend to infinity.
A self-consistent physics
Classical physics arose from the attempt to describe everyday observations such as ‘bouncing’ in terms of mathematical formulae but as we see, these attempts have resulted in a lack of consistency in the theory and a mathematical model that makes no sense with respect to reality.
The field physics formulation is counter-intuitive to start with but is self-consistent and in accordance with reality.
Try thinking of this in terms of particle physics or quantum waveforms and the explanations become positively surreal. What does it mean to even ask if two quantum particles are touching each other? Presumably they are and they aren’t, both at the same time!
Empiricism vs aestheticism
Part of the impetus for this post was the question of ‘Do atoms touch?’ and part was a discussion on the question of whether theories should be ‘beautiful’ or whether empirical observations should rule the day. There seemed to be something of a consensus that experiment should overrule theory and that beauty is a mere bonus and then only if you are lucky.
This is all wrong and has resulted in physics that resembles a patchwork quilt which, although agreeing with a large body of experimental data, nevertheless has no proper foundations, a tenuous relationship to reality and multiple contradictions, and really consists of a collection of isolated and inconsistent theories held together with empirically determined ‘adapter’ constants.
The idea that observation and deduction alone are sufficient to formulate a theory is incorrect. Observations are always made with respect to a framework and are interpreted within it; so, for example, the idea that the universe is full of discrete ‘objects’ is already a sort of theory deriving from intuition and observation – but it is wrong!
Any assertion that discrete objects exist needs proving somehow. There needs to be some testable framework that describes these objects and classical physics has failed to provide this at the outset. Any observation of an ‘object’ is now misleading and any science that uses such an ill-defined concept is eventually doomed.
Similar considerations apply to modern physics. Observations (data) are interpreted with respect to the model, which itself is held to be correct and can never really be disproved. All that happens is that some ‘fix’ is put in and we end up with something like quantum entanglement and information coming backwards in time from the future.
There is more to a theory than mere aesthetics. It should be internally self-consistent and consistent with its own predictions, and if it isn’t then it is just wrong and needs to be discarded. In addition to this, it must have some degree of ‘reasonableness’, some relationship to a presumed nature of reality which sounds ‘viable’ at least.
A theory such as quantum mechanics which allows multiple and outlandish interpretations regardless of whether or not they make any sense, surely has no place in scientific discourse.
Newton’s statement deserves reiteration: “That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.”
This is not theory and not observation and so not really ‘science’ – but it has proved to be correct!
So do they touch?
No. Quite surprisingly, the whole concept of ‘contact’ is not supported by any theoretical framework; it isn’t even possible to define it let alone test for it.
“These are the lies you were taught about electricity: That the electrons themselves have potential energy, that they are pushed or pulled through a continuous conducting loop and that they dissipate their energy in the device. My claim is that all of this is false.” – Derek Muller (Veritasium)
The quotation above is taken from the video below from the Veritasium YouTube channel. This and the follow-up video are proof that electricity does not consist of a flow of electrons. Moreover, they give some clues at least as to what is actually happening. Ideas from the vortex physics of Konstantin Meyl will complete the picture to give a credible explanation for the phenomenon.
At 2:12 we have: “There is no continuous conducting wire that runs all the way from a power station to your house. No, there are physical gaps, there are breaks in the line such as in transformers [..] Electrons cannot possibly flow from one to the other.“
Furthermore, with respect to alternating current: “If the electron flow is in two directions then why does the energy only flow in one direction?“
At 8:20: “People seem to think that you are pumping electrons and that you are ‘buying electrons‘ (from a power company) which is just so wrong. [..] It’s quite counter-intuitive to think that the energy is flowing in the space around the conductor, but the energy which is travelling through the field is going quite fast” – Dr. Bruce Hunt
“It’s the fields and not the electrons that carry the energy” – Muller
11:07 – Under-sea cables that were coated in an insulator and then encased in an iron sheath (for protection) did not perform well.
A transformer (right) consists of two coils of wire separated by a gap. Electricity (whatever it may be) flows through the coil on the left and radiates a field shown as two straight(!) lines which then induces a current by somehow interacting with the wire on the right.
Well the only thing a ‘field’ can interact with is another field. People will say that an electric field can set a charge in motion but a charge is only characterised by its own field and any interaction is totally dependent upon that field.
Assertion: Electricity is some sort of ‘field movement’ within the wire and surrounding insulator. This field extends beyond the wire in some form or other and is able to induce similar movements in the other half of the transformer.
Vortex physics
In the vortex physics of Konstantin Meyl, field movement is described by some slightly modified versions of the Maxwell-Heaviside equations and can adopt several interesting shapes. Helical fields (right) are common, as are ring vortices (smoke rings) and spherical vortices (electrons).
Electric and magnetic fields are inextricably entwined via ‘movement’ at right angles and ‘movement’ is innate to both types of field; ‘static’ fields are an illusion.
Electric fields will propagate easily within a conductor and insulating material will favour the movement of the magnetic component, leading to characteristic patterns of field movement.
The image below is from Viktor Schauberger and depicts the flow of water in a wooden pipe but will serve to illustrate the flow of an electromagnetic field in an insulated wire.
The wire is conductive and favours a helical flow of an electric field whereas the insulating cable favours the construction of magnetic ring vortices. The field vectors for the electric and magnetic fields are at right angles to each other as required.
Similar patterns are observed in the flow of blood (a partially ionised fluid) in the arteries, in vast ‘plasma’ clouds in space (Thunderbolts project) and in weather patterns in our atmosphere (Birkeland currents). These currents are self organising along the lines of a least energy principle and highly efficient, losing very little energy.
So what is electric current?
Forget about electrons for a minute and imagine all manner of field turbulence within a battery as various chemicals react. All this activity amounts to a sort of electromagnetic ‘pressure’; the energy wants to go somewhere.
Now attach an insulated wire and an inviting conduit has appeared. The turbulence enters the wire and begins to self-organise according to local conditions. A helical electrical component forms down the conductor and a magnetic ring vortex proceeds down the insulating sheath.
When these formations get to the light bulb, the ambient conditions have changed and are less favourable for the maintenance of the structures that have been hitherto so stable. The lack of a proper insulator and a less conductive filament cause the whole structure to break up and reformat as an altered field geometry, releasing, in the process, field structures that are interpreted as ‘energy’.
Some structures are transmuted directly to photons and emitted with a characteristic spectrum whilst others are caught up in existing field vortices and will manifest as ‘heat’ (vortex gains and losses). Some of the energy in ‘hot’ vortices will reconfigure as infra-red photons and fly away at the speed of light.
Note that within this formulation, there is no transmutation from electron to charge to force to matter and back to energy as all of energy, charge and matter are really the same thing, namely field activity.
The video comments explained
Transformers
There is no need for electrons here. Field movement travels along a wire and its surroundings. Vortices are discharged from the wire and travel towards the receptor coils. They enter these structures and begin to self-organise in a way that is encouraged by the geometry of the coil.
Movement is intrinsic to electromagnetic fields and somehow a ‘current’ is formed.
The structure of the field in between the transformer coils is almost irrelevant as the flow will reformat within the wires anyhow. This ‘must’ happen as the laws of physics must be obeyed and the current must flow according to local conditions.
Think of pumping water into a hose pipe and waving it around. Whatever the state of the water when it was outside the hose and whatever the nature of the waving, the water will form its own flow profile and can really only go one way or the other along the pipe.
Mainstream physics will talk about the field ‘inducing’ a current in the receptor coil but here the field literally flows from one wire to another. Like water.
How do fields carry energy?
They are not static but literally flow from one place to another.
Watch videos of ring vortices in water to see that they can clearly carry a lot of energy. Similarly a magnetic ring vortex will contain a great deal of electrical energy and this will be made available for use at the other end of the wire somehow.
So it is very likely true that the bulk of the energy is carried in the insulating cable as speculated in the video.
Imagining fields as either static or vibrating entities does make it hard to see how energy is transmitted, and transmitted in one direction only, but the image of a moving ring vortex is surely compelling.
Electrons cannot possibly flow..
No, but field vortices can as they are not tied to ‘matter’.
Undersea cables
The magnetic ring vortices are an integral part of the flow geometry and they perform better in an insulator.
A big current needs a big cable and proportionally sized vortices are required which means a proportionally larger insulator sheath. It is no good just spraying a coating on the cables. The insulation isn’t to stop the electrons falling out but to allow an enclosing vortex structure to form which lends stability and efficiency to the whole flow.
A thin insulator does not allow the rings to form properly; the signal is distorted and the flow starts to break up and dissipate into the salt water.
“If the electron flow is in two directions (alternating current) then why does the energy only flow in one direction?“
Conjecture: With alternating current, the ring vortices are of alternating polarity (direction of spin) but still travel in the same direction. The electrical field vector within the conductor alternates between the forward and backward direction but this is not where most of the energy is held.
The ‘energy’ is contained within the vortex and not in its speed of travel or the direction of spin. For most purposes, the transmutation from vortex to energy is a crude breakdown of structure and is agnostic of the spin direction.
Why is alternating current more efficient than direct current?
It is claimed that this is because it is transmitted at a higher voltage and that this voltage is created via transformers.
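For reference, the textbook arithmetic behind that claim is simple: for a fixed power delivery P = V·I, raising the voltage lowers the current, and resistive loss in the line goes as I²R. A minimal sketch with illustrative numbers (the 10 MW load and 5 ohm line are assumptions, not figures from the video):

```python
# Textbook estimate of line loss for the same transmitted power at two voltages.
# The figures (10 MW, 5 ohm line) are illustrative assumptions only.

P = 10e6        # power to be transmitted, W
R_LINE = 5.0    # total line resistance, ohms

for v in (10e3, 400e3):            # 10 kV vs 400 kV transmission
    i = P / v                      # current needed to carry the same power
    loss = i**2 * R_LINE           # resistive loss in the line
    print(f"{v/1e3:>5.0f} kV: current {i:,.0f} A, line loss {loss/1e3:,.1f} kW "
          f"({100*loss/P:.2f} % of the transmitted power)")
```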
Guessing now: The transformer somehow translates a high rate of low-energy vortices into a lower rate of high-energy vortices. Energy transmission is related to the energy content of vortices whilst energy loss is proportional to the number of vortices. Energy loss is via ‘surface loss’ from the rings.
The idea of ‘voltage’ is of limited use here.
What is discharge?
Several mentions of discharge (of electrons) or charge loss are made along with suggestions of field induction (the capacity to move electrons); these are a standard part of the vocabulary of physics and electronics and all no doubt have different laws to help quantify their behaviour.
Within the framework of vortex physics, however, these are all the same phenomenon, that is to say, the movement of field vortices:
Discharge: Field Vortices going where you don’t want them to.
Charge loss: Electrons cannot disappear completely but vortex structures can
Induction: The change in geometry of a field structure caused by a different conductive environment
None of this has anything to do with electrons.
The second video
The diagram shows part of the experimental setup. A battery (capacitor) is placed in a circuit with a light bulb and the connecting wires stretch out to a distance of one light-second (actually much smaller) so that the ‘electricity’ is assumed to take one second to complete the circuit. See here: How electricity actually works
When the experiment is performed and the current switched on however the light comes on almost immediately and at least much sooner than it takes for light to travel around the circuit.
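The timing is easy to put numbers on. A minimal sketch, assuming (as in the video’s setup) a gap of roughly one metre between battery and bulb, against the naive one-second round trip stated above:

```python
# How long the field takes to cross the gap between battery and bulb, versus the
# naive "one second around the whole circuit". The one-metre gap is approximate.

C = 299_792_458.0     # speed of light, m/s
GAP = 1.0             # straight-line distance from battery to bulb, m

t_across_gap = GAP / C
t_around_circuit = 1.0            # the naive assumption: one full second around the wires

print(f"field crosses the gap in ~{t_across_gap*1e9:.1f} ns")
print(f"naive round-the-circuit estimate: {t_around_circuit:.0f} s "
      f"(~{t_around_circuit/t_across_gap:.0e} times longer)")
```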
How does this happen? The answer is that as field currents start to flow around the circuit they will discharge into the air around the wire and form a de facto field which expands radially at close to the speed of light and eventually impacts upon the supply wire to the light bulb.
These vortices enter the wire and start to flow according to local conditions thereby creating a ‘current’, that is to say a structured flow of field energy.
Several commentators remark that this current should be infinitesimal, however, it turns out in practice to be strong enough to illuminate the light bulb.
What is going on?
Envisaged by classical physics is an electric field such as illustrated here, possibly coupled with an accompanying magnetic field that similarly decreases in strength in inverse proportion to the distance from the wire. The impact on the receiver wire will be small.
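For reference, the standard magnetostatic expression for a long straight wire is B = μ0·I / (2πr), which falls off as 1/r. A minimal sketch with an illustrative 1 A current, just to show the fall-off that makes the classical expectation so small at the second wire:

```python
import math

# Classical magnetostatic field around a long straight wire: B = mu0 * I / (2 * pi * r).
# Illustrative current of 1 A, showing the 1/r fall-off referred to above.

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A
I = 1.0                     # current, A

for r in (0.001, 0.01, 0.1, 1.0):     # distance from the wire, m
    b = MU_0 * I / (2 * math.pi * r)
    print(f"r = {r:>5} m: B = {b:.2e} T")
```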
As soon as this field impacts the wire, however, current flows in the wire and produces its own field which starts to interact with the first (transmitter). Ignore the plus and minus signs here; the point is that the two fields are interacting over a region that is much larger than just the second wire itself.
Consider then this possibility: The electric and magnetic fields together form a helical vortex structure around the wires with the transmitter forming an outward spiralling vortex and the receiver hosting an inward spiral.
Energy then flows from one vortex to the other, the inverse square law is not appropriate and sufficient energy flows to light the bulb.
The vortex from the transmitter expands at close to the speed of light and impacts the second conduit. The current is small at first but it creates its own vortex which expands at a similar speed, harvesting more and more energy as it does so.
At first, a doubled radius of the second vortex means a rough doubling of the energy gathered and hence a doubling of the current formed.
A unified field forms with a simplified form shown right and the rate of energy transport from one wire to another is .. anybody’s guess! It is likely that ultimately the ‘induced’ current drops off in approximate proportion to the distance purely on geometrical grounds.
This isn’t a ‘law’ though but a general principle as what is measured is always some sort of average which has been interpreted via a measuring instrument.
Note the contra-rotation of the helical fields and imagine what this would look like when extended over the whole circuit. The rotation is always the same way with respect to the current. An extended ring vortex is formed around the whole circuit and this is already known to be a highly stable structure.
Ok, now consider the screenshot from the second video:
The green line shows the current in the transmitter wire and the yellow shows the current in the receiver. The green arrow points to the time that the switch was turned on; the transmitter current shoots up almost instantaneously.
The current in the receiver though shows a linear increase up to the point of the yellow arrow which represents the current that initially drives the light bulb. Thereafter there is a sharp increase as the current completes the entire circuit.
The vortex model can be said to predict the linear increase but the classical model cannot. What would be expected by established theory is a sudden but ‘infinitesimal’ current which would then remain stable at a very low point.
Conduction within nerves
If conduction within wires is largely by means of ring vortices then maybe the same is true for the transmission of nerve impulses?
Many papers find that there is a relationship between the speed of propagation of nerve impulses and the thickness of the insulating sheath surrounding a nerve; the thicker the myelin sheath, the faster the signal propagation:
Local modulation of neurofilament phosphorylation, axonal caliber, and slow axonal transport by myelinating Schwann cells – de Waegh, Brady https://pubmed.ncbi.nlm.nih.gov/1371237/
As with the undersea cables, a thicker sheath allows the free and unconstrained development of ring vortices whilst a thin sheath necessitates a deformation or stretching of the vortex to fit within the sheath. Additional surface area means additional ‘field drag’ (also known as friction), which leads to energy loss and slower propagation.
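Whatever the mechanism, the empirical relationship is roughly linear: for myelinated fibres, conduction velocity scales with the overall fibre diameter, and a commonly quoted rule of thumb (from Hursh’s classic measurements, not from the paper cited above) is about 6 m/s per micrometre of outer diameter. A minimal sketch of that scaling, treating the factor as an approximation rather than an exact law:

```python
# Rough empirical rule for myelinated fibres: conduction velocity is roughly
# proportional to the outer fibre diameter, ~6 m/s per micrometre (Hursh, 1939).
# This is an approximation, not an exact law.

VELOCITY_PER_UM = 6.0   # m/s per micrometre of outer fibre diameter

for diameter_um in (1, 5, 10, 20):
    v = VELOCITY_PER_UM * diameter_um
    print(f"outer diameter {diameter_um:>2} um -> conduction velocity ~{v:.0f} m/s")
```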
Summary
Electricity is the continuous flow and transmutation of energy fields from one environment to another. The geometry and conductive properties of that environment in conjunction with the principles of vortex physics characterise the flow.
Within this landscape, various patterns emerge, and the simplest of these, such as voltage, current and field ‘induction’, have been adopted as standard, but none of them has a particularly sound basis in reality, being largely artefacts of the measuring instruments themselves.
Other patterns such as ring or helical vortices on the other hand are theoretical constructs that have not been measured and yet give a greater and more consistent understanding of the phenomena that we actually observe.
Local modulation of neurofilament phosphorylation, axonal caliber, and slow axonal transport by myelinating Schwann cells – de Waegh, Brady https://pubmed.ncbi.nlm.nih.gov/1371237/