The idea that the sequence of base pairs in DNA represents any sort of blueprint for life is nonsensical from an information-theoretic point of view. The sequences are meaningless by themselves and require systems of translation and transcription which themselves require the existence and maintenance of unfeasibly large amounts of information.
This refutation applies to all systems where ‘information’ is regarded as an abstract entity and divorced from any physical function.
The idea of DNA as ‘information’
Mainstream science tells us that the ordering of the base pairs in a strand of DNA represents some sort of blueprint for living systems. Depending upon who you read, it represents either an entire organism or just the structure of the proteins in the body. Either way, the idea is unfeasible.
The base pairs of DNA constitute data, not information, and without interpretation they are really meaningless strings of digits. There is no obvious code for a protein written into a DNA strand and no reference to any laws of physics or biology; all we have so far is a stream of ‘bits’.
The storage of data as a stream of DNA base pairs may be appropriate for stable storage and integrity during reproduction but it will not, of itself, lead to the development of a new organism.
To convert this stream of bits to anything resembling organic life, we therefore need to translate the bits from this coding scheme to one that is more representative of the laws of bio-chemistry and then somehow implement these physical instructions to construct a real entity. Scientists recognise these steps and refer to them as translation and transcription (strictly, in the standard account the DNA is first transcribed into RNA and the RNA is then translated into protein).
Translation
The human genome contains roughly 3 billion base pairs, which at two bits per base amounts to about 750 megabytes of stored data, but we need to be able to interpret this data and translate it to a series of protein coding schemes or something similar. The question arises then as to how much data is needed for the translation scheme itself.
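As a rough check on the numbers (a sketch assuming only the commonly quoted figure of about 3.2 billion base pairs):

```python
# Raw storage content of the genome, treating each base pair as 2 bits
# (four symbols: A, C, G, T). Illustrative arithmetic only.
base_pairs = 3.2e9                 # commonly quoted human genome size
bits = base_pairs * 2              # 2 bits select one of 4 bases
megabytes = bits / 8 / 1e6
print(f"{megabytes:.0f} MB of raw data")   # ~800 MB
```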
An analogy is that I want to send a Shakespeare sonnet to someone in China but they don’t speak English, so an English-Chinese dictionary needs to be involved. A sonnet contains a mere 14 lines of text but the dictionary needs to contain every single word in the English language, just in case it is present in the sonnet.
The dictionary in this case then must contain vastly more data than the information to be translated.
What size of dictionary is required to translate all the potential data in a genome? ‘Unfeasibly large’ appears to be the answer.
Maintaining integrity of the data
The volume of data isn’t the only problem; we have to ensure that it is stored somewhere, free from corruption and somehow inherited. We need to specify some medium in which this data is embodied.
If we say that the integrity is maintained by error correction then we now need extra data and extra functions to implement the error correction and these themselves must be error free.
The mechanisms for error correction, translation and transcription need to be precisely inherited themselves and again require the presence of extra information.
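A toy error-correcting code makes the regress concrete (a minimal sketch, not a model of any biological mechanism):

```python
# Toy 3x repetition code: even this trivial scheme needs a decoder,
# which is itself extra machinery that must be stored, inherited and
# kept error-free -- the regress described in the text.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(stream):
    triples = [stream[i:i + 3] for i in range(0, len(stream), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]   # majority vote

data = [1, 0, 1, 1]
noisy = encode(data)
noisy[4] ^= 1                      # corrupt one transmitted bit
assert decode(noisy) == data       # corrected -- but only given the decoder
```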
The embodiment of biological data as a digital system has not solved any problem at all; instead it has added extra problems to solve, involving ever larger quantities of data. The whole scheme actually necessitates an infinite regression of encoding and error correction.
Transcription
In addition to a dictionary for translation, we need some mechanism for transcription. The translated information coming from the DNA needs to be input into some physical process which will go on to construct proteins or whatever. So what does this process consist of, how was it constructed, where is the information for this and how was such information inherited? The information cannot be contained in the DNA itself because it was needed to construct the machinery that extracts information from the DNA in the first place.
We have managed to describe another infinite chain of regression, this time for the transcription process.
A generalisation of the problem
The problems above are described with reference to DNA but clearly apply to any digital encoding scheme within biological systems.
The central problem is that digital data is just a string of bits and at some time this will need to be converted to a real entity via the laws of physics. There are no laws of physics in a stream of bits, no feedback systems and no energy to drive the process along. All these must come from somewhere else.
The whole narrative draws attention away from the practical problem of manufacturing a cell and just points to the ordering of base pairs as somehow a great discovery.
The same problem will arise whenever a data stream is regarded as source of ‘information’ and whenever the idea of ‘information’ is regarded as an abstract mathematical entity with no concrete relationship to the laws of physics or bio-chemistry.
The solution in abstract
The solution then is to stop regarding ‘information’ and physical structure as separate entities and acknowledge that within biological systems at least, biological information must consist of ‘functionality’, i.e. it must consist of some concrete physical entity that is capable of getting things done.
Information must be in some sense ‘absolute’ and related to the laws of physics in order to remove the need for both translation and transcription. Biological information cannot therefore be digital or ‘abstract’ in nature.
A concrete solution
Konstantin Meyl, in his book “Scalar Waves”, has stated simply that: “(biological) Information is the structure of a scalar wave.”
A scalar wave in this case is an electromagnetic structure as described by Tesla which is likely found throughout biological systems. See: The nature of the bio-field
This proposal fits all of the requirements for biological information.
Such structures are inherently self-stabilising
They have their own motivational force
Will propagate along appropriate biological conduits
Have their own intrinsic energy
Additional energy may be absorbed from the environment
Energy transduction enables ‘persistence’
Energy transduction enables ‘function’.
Specific characteristics enable specific function
Electromagnetic nature enables direct interaction with the bio-field
Obviates the need for translation and transcription
These are the requirements that we need in abstract. There may be other physical constructs which implement these features, but scalar waves seem a very good fit.
An attempt is made here to understand Einstein’s theories of relativity, particularly with respect to the central idea of an inertial frame of reference. Available descriptions are confusing and contradictory, with definitions of the basic concepts either ambiguous or absent; Einstein himself voiced similar concerns. Some of Einstein’s fundamental errors are pointed out and alternative ideas proposed. The experimental results that are claimed to be explained by the theory of relativity are insufficient to prove it, and in many cases alternative explanations are available.
Inertial frames of reference
The idea of an inertial reference frame is key to Einstein’s theories of relativity, both ‘special’ and ‘general’. It follows that:
If we can’t understand inertial frames of reference then we can’t understand relativity
If a text doesn’t explain inertial frames properly then it hasn’t explained relativity
If reference frames have no consistent, unambiguous definition then relativity is likewise undefined
We take Wikipedia as a respected source of information on this and try to understand the main ideas.
What is a ‘frame of reference’?
In physics and astronomy, a frame of reference (or reference frame) is an abstract coordinate system, whose origin, orientation, and scale have been specified in physical space. It is based on a set of reference points, defined as geometric points whose position is identified both mathematically (with numerical coordinate values) and physically (signalled by conventional markers). – Wikipedia
So a frame of reference is just a coordinate system and as such we can use it to define such a thing as ‘position’. If we now integrate the concept of ‘time’ somehow, we can define the change of position over time and call it ‘movement’ or ‘velocity’.
‘Velocity’ is the rate of change of position with respect to a specified coordinate system and agreed time metric.
Likewise we can define the concept of ‘acceleration’ as the rate of change of velocity with respect to a specified coordinate system and agreed time metric.
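In symbols, with \(\vec{x}\) the position in the chosen coordinate system and \(t\) the agreed time metric:

$$\vec{v} = \frac{d\vec{x}}{dt}, \qquad \vec{a} = \frac{d\vec{v}}{dt} = \frac{d^{2}\vec{x}}{dt^{2}}$$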
Coordinate systems (frames of reference) are described as frameworks for the specification of position, velocity and acceleration and that is all.
Conversely, if we are to describe such things as position, velocity and acceleration, then the framework with respect to which they are defined deserves the term ‘frame of reference’.
To reiterate: all position and movement is defined with respect to a frame of reference (coordinate system).
A first inconsistency?
In the same paragraph, Wikipedia goes on to say:
An important special case is that of an inertial reference frame, a stationary or uniformly moving frame. – Wikipedia
Ouch!
What is a ‘stationary or uniformly moving frame’? Such uniform movement (or otherwise) is only defined with respect to some coordinate system (reference frame), but which one?
We are talking here about the movement of a reference frame itself, not objects within it. Such a movement is nevertheless ‘movement’ and hence must be measured in some coordinate system in order to have any meaning at all. The moving framework cannot be described with reference to itself (it would always be stationary!) and so some other ‘higher’ or ‘universal'(?) framework is assumed here but not explicitly stated.
I would suggest that the reason such a framework is not discussed is because the eventual aim is to give justification to the idea, from Einstein, that no coordinate system is preferred over any other; everything is ‘relative’.
What is an inertial frame of reference?
The abstract idea of a frame of reference was introduced above, but Wikipedia has a whole separate entry now on the definition of a specifically ‘inertial’ frame of reference:
An inertial reference frame is a frame of reference in which Newton’s first law of motion holds true without any corrections. This means that an object either remains at rest or continues to move with constant velocity in a straight line unless an external force acts on it. In such a frame, there are no fictitious or pseudo forces required to explain the motion of objects. – Wikipedia
Compare with the first definition above, where an inertial reference frame is described as a “stationary or uniformly moving frame”.
The first definition is in terms of coordinates, of position, distance, velocity and acceleration (change of velocity over time) but the second is in terms of Newton’s laws of physical motion.
These two concepts are worlds apart and should never, ever, be assigned to the same terminology. There is no concept of ‘force’ within a coordinate system, nor of an ‘object’, ‘inertia’ or even ‘mass’; these are separate entities that need their own definitions.
Note that the first definition of an inertial frame contains no mention of the word ‘inertia’ – and so why refer to it as ‘inertial’? This tends to conflate the idea of inertia with that of acceleration. They are obviously different entities but later descriptions of relativity require that they be effectively the same thing, and so describing a stationary frame as ‘inertial’ makes it a practical certainty that such a conclusion should eventually be reached.
Again, from the same article in Wikipedia:
Inertial reference frames are either at rest or move with constant velocity relative to one another. – Wikipedia
What does this mean? Two possibilities:
This is a definition. Inertial frames are now defined as those that are at rest or in uniform motion relative to one another
This is a theoretical consequence of the definition in terms of Newton’s first law.
In all likelihood, the second possibility is intended, but it needs some justification. The attempt here is to define the basis of special or general relativity and so accuracy is required.
What does it mean to “move with a constant velocity relative to one another”? Remember that velocity is always defined with respect to the elements of a coordinate system and so the relevant coordinate system here should be specified. We can guess here that each coordinate system is to be regarded as an element of the other, but this has the consequence that each system ultimately contains a reference to itself!
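For what it is worth, the textbook formalisation of this phrase is the Galilean transformation, in which the primed frame’s origin is tracked as a moving point of the unprimed frame:

$$x' = x - vt, \qquad t' = t$$

The relative velocity \(v\) here is itself expressed in the coordinates of one of the two frames, which is precisely the self-reference complained of above.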
If the only qualifications of an inertial system are those to do with relative velocity, then why are they described as ‘inertial’?
This is a perfect example of definition creep which seems ubiquitous in attempts to describe relativity. Descriptions start off talking about velocity and acceleration, i.e. events within a pure coordinate system, but soon turn to forces and inertia and after a while the reader becomes hypnotised into believing the central tenets of the theory with no real justification at all.
Special relativity
From the Wikipedia entry on special relativity:
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein’s 1905 paper, “On the Electrodynamics of Moving Bodies”, the theory is presented as being based on just two postulates:
The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity.
The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
Read again: “The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration).”
So we are now describing inertial frames as those with no acceleration again. Fine, but acceleration with respect to what exactly? If this question cannot be answered then there is no acceptable definition of special relativity.
Note that this definition of inertial frames is both convenient and necessary here, for if we accept the alternative definition of a frame of reference where Newton’s law holds true then we have something like: “The laws of physics are invariant (identical) in all frames of reference where Newton’s first law holds”. This is not entirely vacuous, but note that it cuts out the idea of acceleration altogether, and if all we are concerned about is Newton’s law then we get: “Newton’s first law holds in all frames of reference where Newton’s first law holds”. This is vacuous now and nothing of any meaning has been said about Newton’s law, gravity or acceleration.
From the same Wikipedia article:
In relativity theory, ‘proper acceleration’ is the physical acceleration (i.e., measurable acceleration as by an accelerometer) experienced by an object. It is thus acceleration relative to a free-fall, or inertial, observer who is momentarily at rest relative to the object being measured.
And there you have it! The transformation is complete! We have moved seamlessly from a definition of acceleration that everybody understands to one that is convenient for the theory of relativity.
Accelerometers do not measure acceleration in the conventional sense of the word but instead record the displacement of a weight owing to either inertial or gravitational forces.
We started with ‘acceleration’ meaning a change of velocity within a specific coordinate system and ended up with a definition in terms of forces, inertia and gravitational attraction. We have now seemingly described inertial reference frames without the need of velocity or position, or in other words, without any of the qualities that identify a reference frame as a coordinate system.
A non-accelerating frame has become synonymous with a force-free frame simply by linguistic trickery.
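A toy model makes the accelerometer point concrete (a sketch, not a model of any particular device): the instrument reports the spring force on its proof mass, so it reads zero in free fall and 9.8 m/s² while sitting still on a bench.

```python
# Toy one-axis accelerometer: it reports the spring force on its proof
# mass (specific force), not coordinate acceleration. Sensing axis: up.
def accelerometer_reading(coord_accel, local_gravity):
    """Both arguments in m/s^2; up is positive."""
    return coord_accel - local_gravity

g = -9.8  # gravitational field pointing down

print(accelerometer_reading(0.0, g))  # at rest on a bench: reads +9.8
print(accelerometer_reading(g, g))    # free fall: reads 0.0 while accelerating
```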
Einstein’s concerns
This conflation of ‘inertial’, ‘non-accelerating’, ‘force free’ and ‘Newtonian’ has not gone unnoticed:
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it, is perceived to move with a constant velocity, or, equivalently, Newton’s first law of motion holds. – Wikipedia
What is meant by ‘zero acceleration’ in the above?
If you do not have an absolute frame of reference then how can you ever say that something is moving with constant velocity (zero acceleration)? You clearly can’t and so they are trying to define constant velocity as relative to other frames that are also moving with constant velocity relative to each other. This is gibberish.
Einstein himself was aware of the problem:
The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.
— Albert Einstein: The Meaning of Relativity, p. 58
Zero acceleration is now defined, not with reference to a coordinate system but by the lack of gravitational attraction from other bodies.
Example: Two falling weights
Inertial reference frames are either at rest or move with constant velocity relative to one another. – Wikipedia
As an example, consider two astronauts positioned a thousand miles above the Earth, a hundred miles apart and falling freely towards the planet’s surface.
A stationary observer at the surface will see these astronauts accelerating with respect to himself and also with respect to each other as they converge. Furthermore, the astronauts see themselves as accelerating towards each other and towards the Earth.
By the discussion above, we cannot have all of these as being stationary within inertial frames at the same time – so which ones are inertial and which ones are not? How do we tell?
Which of these bodies is moving ‘without acceleration‘? Physicists will no doubt say: “The freely falling bodies are in an inertial frame because they experience no force and Newton’s first law holds”, but the question was about acceleration and replying in terms of forces like this pretty much assumes the conclusion that Einstein was trying to reach.
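A quick Newtonian estimate (a sketch using textbook constants, with the distances from the example converted from miles) shows the convergence the astronauts would actually measure between themselves:

```python
# Newtonian estimate of the astronauts' mutual convergence.
GM_EARTH = 3.986e14      # m^3/s^2
R_EARTH  = 6.371e6       # m
MILE     = 1609.34       # m

r = R_EARTH + 1000 * MILE   # distance from Earth's centre
d = 100 * MILE              # horizontal separation
g = GM_EARTH / r**2         # free-fall acceleration at that altitude

# Each acceleration vector points at the Earth's centre, so the pair
# converge at roughly g * (d / r): small, but nonzero.
print(f"g at altitude: {g:.2f} m/s^2, convergence: {g * d / r:.3f} m/s^2")
```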
Why is all this happening?
Newton’s theory of gravitation is based upon the ideas of mass, gravity, force, inertia and acceleration. However, whilst it is clear that there is some relationship between these quantities, it isn’t quite clear precisely what this is and nor is there any basic mechanism described for the phenomenon of either inertia or gravitational attraction.
Einstein saw this and conceived the idea that inertia and gravity are one and the same thing but viewed according to different coordinate systems. The acceleration caused by gravity is now simply the acceleration of a body as perceived from an accelerating frame of reference, nothing more and nothing less.
Einstein thereby obviated the need to describe a mechanism for gravity by simply relabelling it as ‘acceleration’. He declined to provide a physical mechanism for gravity and instead reframed it as, very simply, a change in position relative to something else! An observation (measurement) has been elevated to the status of a physical law.
The equivalence principle
A version of the equivalence principle consistent with special relativity was introduced by Albert Einstein in 1907, when he observed that identical physical laws are observed in two systems, one subject to a constant gravitational field causing acceleration and the other subject to constant acceleration, like a rocket far from any gravitational field. Since the physical laws are the same, Einstein assumed the gravitational field and the acceleration were “physically equivalent”. – Wikipedia
Einstein stated this hypothesis by saying he would:
“…assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system.”
— Einstein, 1907
This is obviously two big mistakes rolled into one short phrase.
First, Albert refers to an “acceleration of the reference system“, but again we can ask: “With respect to what?”
Second, the phrase “complete physical equivalence” is surely a massive overreach? The text above claims that Einstein “observed that identical physical laws are observed in two systems… like a rocket far from any gravitational field”. Really? How did he observe this? A complete characterisation of the laws of physics is not available at present and was not available in 1907. There is therefore no way of testing for complete physical equivalence. It is a meaningless phrase.
The available laws at the time were Newton’s laws of gravity and since these were proving to be inadequate, alternatives should have been considered. Instead what has happened is that Einstein has tried to ‘fix’ the paradoxes of Newton by the simple means of equating all acceleration with gravitational acceleration. By this means he can do without any explanation for a physical mechanism of gravity and just say that it is ‘acceleration of the reference system’.
We can say that no additional physics is being proposed here, merely the same Newtonian laws but described from different perspectives. Indeed, the proposed equivalence of acceleration and gravity actually stifles further enquiry into the topic as there is nothing further to research, with any further anomaly resulting in attempted explanations by manipulation of the reference system only.
Out of necessity now, Einstein will go on to explain the laws of physics purely in terms of outlandish frames of reference, resulting in the concept of 4-dimensional curved space-time with shrinking lengths and clocks that run at different rates.
A model of the fundamental nature of space and time has arisen purely from considerations of gravity and acceleration, and much of that mere conjecture. It is no surprise then that the new theory says nothing about the forces of electromagnetism and is unlikely to do so for the foreseeable future.
The gravitational field
The conflation of a gravitational field with mere acceleration effectively rules out the investigation of any characteristic of a gravitational field that is not relevant to acceleration; the theoretical framework is simply not able to express such properties.
Gravitation is now synonymous with acceleration: it has no other function than to move objects and no measurable or theoretical properties other than those pertaining to acceleration.
This is clear bunk. We have, in a gravitational field, several properties which are likely to have effects other than pure acceleration:
A diminishing of strength according to an inverse square law
A divergence of the ‘field lines’
A reduction of curvature of the isobars according to an inverse square law
Some fine grained structure arising from the atomic structure of the Earth
A directional accelerative propensity towards the Earth
An aligning effect on a ship’s gyroscopic compass
A mechanism for inertia
Some other global structure aside from a simple ‘sink’ (e.g. a vortex structure)
Something to explain the precession of the perihelion of Mercury
Some of these are already measurable and others may be measurable in the future or calculable from other measurables. To say that they don’t exist or aren’t relevant is positively deranged and for a theoretical framework which rules these out to survive for a whole century is just inexplicable.
Gravitational attraction is not just acceleration; there is a mechanism producing such an acceleration which needs explaining. Indeed, acceleration itself is not a mechanism but the resultant effect of such a mechanism, whatever that may be.
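As a simple illustration of the first item in the list above, the inverse-square diminishment of field strength is already measurable and calculable (a sketch with standard constants):

```python
# Inverse-square falloff of the field strength with altitude.
GM_EARTH = 3.986e14   # m^3/s^2
R_EARTH  = 6.371e6    # m

for altitude_km in (0, 400, 1000, 36000):
    r = R_EARTH + altitude_km * 1e3
    print(f"{altitude_km:>6} km: g = {GM_EARTH / r**2:.3f} m/s^2")
```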
Example: elevator gravity
An example below from the Wikipedia entry on General relativity:
According to general relativity, objects in a gravitational field behave similarly to objects within an accelerating enclosure. For example, an observer will see a ball fall the same way in a rocket (left) as it does on Earth (right), provided that the acceleration of the rocket is equal to 9.8 m/s² (the acceleration due to gravity on the surface of the Earth). – Wikipedia
So now objects in a gravitational field only behave similarly to objects within an accelerating enclosure, whereas before, the laws of physics were identical.
What is an accelerating enclosure accelerating relative to? If the rocket is at the surface of the Earth then it does not need to accelerate as the effects are already there from the gravitational field.
We are intended to imagine the rocket in space far away from any gravitational field. However, there is no such place in the universe and so no such experiment has been performed and never will be performed.
We have, from the same article:
…it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.
This is pure conjecture: a thought experiment whose result has been assumed, from which a theory has been developed with no empirical data or foundational definitions.
There is no such thing as ‘free space’; the whole of space is permeated by a gravitational field. And what is the meaning of “stationary in a gravitational field”? Again, another use of the word ‘stationary’ without reference to a well-defined coordinate system.
What is the solution?
We could go on like this almost indefinitely but the fundamental problem always remains: the lack of a well-defined coordinate system in which these events take place. Without this we have no way of defining acceleration or even velocity, and since the whole point of the theory of relativity is to describe gravitational effects in terms of such metrics, it can be regarded as a failure.
It is all very well to criticise something, but such comments will simply fall upon deaf ears unless some sort of alternative is at least suggested.
The ‘Inertial Field Theory’ (IFT)
The post: Gravity as an inertial field outlines an idea that gravity is in fact an ‘accelerating’ inertial field with mechanisms and characteristics of itself that explain the local movement of matter in the cosmos.
Consider that:
A gravitational field has fine grained structure on the scale of the atom
A horizontal component is present
The local structure provides for inertial effects
An accelerative component provides for gravitational attraction via ‘movement’ of the inertial mechanism
The accelerative component derives from the global structure whether it be purely radial or vortex-like in nature
The idea of a ‘uniform’ gravitational field is probably bunk
We can now describe a plausible and at least consistent foundation for a theory of gravitation and provide answers for Einstein’s thought experiments.
What is an ‘inertial frame’?
If a gravitational field has both horizontal and vertical components that are roughly isotropic then we may use this as the basis of an actual physical coordinate system. The system is uniform only locally and theoretically varies from point to point across the whole universe.
This aspect of the gravitational field is insensitive to ‘uniform’ motion of matter but has a certain accelerative resistance thereby providing for both inertia and gravitational acceleration.
The gravitational field has a fine grained structure of a certain scale and this may be used as a basis for a metric of length and hence velocity and thereby acceleration. We therefore have a coordinate system that is:
Highly local – not global
‘Absolute’ in a sense as opposed to arbitrary or relative
Defined by characteristic physical processes, whatever they may be
Responsible for both defining and implementing the laws of gravity and inertia
Free-falling objects move according to local field conditions only and can be said to be following an ‘inertially straight’ path. This is not a geodesic in space-time as there is no need to suppose a distinct space-time as separate from the local field. This is not necessarily the shortest distance between any two points but is a path determined by local field interaction at every point on the path.
Free falling objects in close proximity form an equivalence class of objects which may be said to be in ‘uniform motion’ relative to each other. Their velocities are all constant relative to the local inertial field and constant relative to each other by definition.
There is no need for an abstract coordinate system anywhere, as the idea, maybe surprisingly, doesn’t make any sense. Physical objects are moved around by physical field phenomena and that is all. Any idea of a metric must come from emergent properties of the field itself. In stark contrast to Einstein’s approach, where coordinates and ‘space’ are ‘fundamental’, we have a system where the physical gravitational field is the fundamental entity and any coordinate or metric is defined in terms of local field characteristics or their effect on ‘matter’.
The field forms an inward spiralling vortex system around the Earth where the rotation at the Earth’s surface is synchronous with the Earth’s rotation, thereby forming a ‘gravitational-inertial layer’ at the surface of the Earth which provides for laboratory conditions. Almost all experiments performed by physicists have been within this layer, thereby giving the impression that such conditions are representative of the cosmos as a whole and that all discoveries have been ‘fundamental’ and universal. The Michelson-Morley experiment was performed within this layer.
The horizontal components of the field give rise to inertia and centrifugal forces. Objects at the Earth’s surface can be said to be accelerating upwards relative to the Earth’s gravitational field, where such acceleration is relative to the downward accelerative component of the (physical) gravitational field.
Any experiment carried out in a free-falling rocket is nevertheless within a gravitational field somewhere and this field provides a physical reference frame for measurements, movement, acceleration and the behaviour of rotating bodies.
What would Einstein say?
I think Albert would approve; he was obviously trying to:
Remove the need for a global coordinate system
Define physical laws locally
Somehow unify gravity, inertia and acceleration
Explain the Michelson-Morley result
Explain rotational motion and centrifugal forces
Come to terms with his own discomfort with the foundational ideas
Unification of inertial and gravitational fields
The gravitational and inertial fields are different components of the same field:
Even in generally-covariant reformulations of these older theories, there will be an inertial field and a gravitational field existing side by side. The unification of these two fields into one inertio-gravitational field that splits differently into inertial and gravitational components in different coordinate systems (not necessarily associated with observers in different states of motion) is one of Einstein’s central achievements with general relativity – Michel Janssen
The motivation is good but the unnecessary introduction of different coordinate systems spoils the idea. The accelerative effect of the gravitational field is always present even if it is not measured. If an observer is freely falling towards Earth, they will not experience any accelerative effect from the gravitational field as they are moving along with the field acceleration. However, there must be some sort of mechanism producing this effect and that physical mechanism is not going to disappear just because the observer is moving along with it.
One idea might be that it is the radial convergence of the gravitational field lines towards the planet which produce such acceleration, in which case an observer can accelerate all they like towards the Earth but the field lines have their own ontology within the theoretical framework and are not going to vanish just because they are being ignored.
Another idea is that it is the ‘curvature’ of the field which produces such acceleration. This curvature diminishes with the inverse square of the distance from the Earth and so can be thought of as producing less acceleration the further out in orbit we are.
Some texts talk about a ‘uniform gravitational field’ in an attempt to simplify the ideas of special relativity, but if either of the above two hypotheses is true then there is no such thing as a ‘uniform gravitational field’, since the acceleration comes from phenomena that derive directly from the radial or curved nature of the field. It is like insisting that the centripetal effect of a tornado has nothing to do with the rotational nature of the wind, and then trying to simplify to a ‘flat’ tornado!
The removal of a global frame of reference
After the development of General Relativity, Einstein wrote:
Why were another seven years required for the construction of the general theory of relativity? The main reason lies in the fact that it is not so easy to free oneself from the idea that co-ordinates must have an immediate metrical meaning
(Einstein, 1949, p. 67).
Einstein failed to do this:
As we will see .., the coordinates that Einstein actually used in his accounts of the twins and the bucket in the 1910s have essentially the same status as those in special relativity. They still have direct metrical significance and still identify and individuate space-time points uniquely. – Michel Janssen
The scheme that Einstein settled upon was to identify ‘space-time’ as representing a global and somewhat ‘absolute’ reference frame but at the same time to allow such a coordinate system to have a curved geometry and to allow such curvature to be produced by some physical (although unspecified) process involving something called ‘mass’.
Thinking about this in a quiet place, we realise that this is just a rephrasing of all the ambiguities and double-speak that plagued the early formulations of special relativity.
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.
— Albert Einstein: The Meaning of Relativity, p. 58
The idea is ostensibly to use a coordinate system (reference frame) as a basis for defining acceleration as before, but the coupling of ‘mass’ to the geometry of space-time performs the same linguistic trickery as before and effectively re-defines an inertial frame by its propensity to accelerate an object. This is just a rehash of Newton’s force = mass × acceleration, but with ‘force’ replaced by ‘space-time curvature’, ‘mass’ replaced with ‘the propensity to curve space-time’ and acceleration with ‘movement caused by space-time curvature’.
Again, no new physics has been produced and all we are left with is a more complicated way of looking at Newtonian gravitation.
Moreover, the formulation of acceleration as being something like the natural propensity of a mass to move through space-time effectively removes the need to provide any other explanation for such a phenomenon. A physical law is replaced with a ‘natural propensity‘. This is not a new physics but a way of avoiding doing any physics at all!
Example: a geo-stationary space station
Imagine a geo-stationary space station hovering above a laboratory on Earth. The relative velocity of the laboratory and station is zero. There is no relative movement, so are they both in the same inertial frame of reference or not?
Although there is no obvious relative acceleration I think that most physicists would say that they are in different inertial frames and the reason given would be that the station is in free-fall whereas the laboratory is not.
So although frames of reference are theoretically defined in terms of spatial acceleration, none of this really matters when it comes to actual examples and we find again that inertial frames are described in terms of what physicists imagine is happening in physical space.
How do we know that an orbiting station is in free-fall when it has no relative movement, let alone acceleration? How do we know that conditions at the surface are different? Not by any observed acceleration between the laboratories, that is for sure, but by the overall geometry of the situation and the observed difference in behaviour of masses within each room.
Such behaviours are clearly independent of each other and decoupled from any relative acceleration that may exist between the laboratories. Experiments within each room unfold according to the local field conditions within that room and that is all. What does the idea of variable frames of reference add to any of this?
Special relativity as an engineering model
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. – Wikipedia
There is no place in the universe that is without gravity and so we can disregard special relativity as a reliable representation of actual reality. It is not a law of physics, it is not a law of nature and it is not a fundamental principle. It is at best a collection of useful rules of thumb that can be used to address specific physical problems.
As a theoretical framework it is riddled with ambiguities and deficiencies as we have seen and in particular it has failed to define either gravity or acceleration.
Even the idea that it can be used to perform useful calculations where gravity is negligible is surely a joke? How do we know if we can ignore gravity when gravity has not even been defined properly? The equivalence principle says that gravity is indistinguishable from acceleration and is therefore, along with acceleration, effectively unmeasurable and undefinable. We are therefore left asking: “What is it, exactly, that can be neglected?”
General relativity is no better and suffers the same fundamental problem which is that of defining acceleration, gravity, inertia, frames of reference and a global coordinate system.
Attempts to identify gravitational attraction with pure acceleration have failed and at the same time effectively prevent any further enquiry into the nature of the gravitational field, having given the impression that the problem has already been solved in terms of bendy space-time.
The Michelson-Morley experiment
It turns out that light is measured as having the same speed travelling with the Earth’s motion through space or against it. This was a surprise at the time and is said to be the motivation behind the development of special relativity.
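For scale, the shift the classical ether model predicted, and which the experiment famously failed to find, is easily computed (a sketch using the standard textbook figures for the 1887 apparatus):

```python
# Classical (ether-drift) expectation for the Michelson-Morley fringes.
L   = 11.0      # effective arm length, metres (1887 apparatus)
lam = 5.5e-7    # wavelength of the light used, metres
v   = 3.0e4     # Earth's orbital speed, m/s
c   = 3.0e8     # speed of light, m/s

expected_fringe_shift = (2 * L / lam) * (v / c) ** 2
print(f"expected shift: {expected_fringe_shift:.2f} fringes")  # ~0.4
```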
So how did Einstein solve the problem? Put simply, he just declared the result to be a fundamental principle of physics and manipulated everything else to fit the result that he wanted.
From the definition of special relativity:
2. The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
This is just garbage, just a crude forcing of the result that was required based upon one experimental result only.
There is no such thing as an inert and empty vacuum as normally conceived since all areas of space are permeated by a gravitational-inertial field. Moreover, since it is precisely these fields that are of relevance here, this should be explicitly acknowledged rather than brushed under the carpet as ‘vacuum’.
One consideration is that the gravitational field at the surface of the Earth rotates with the Earth thereby providing a stable reference frame for the movement of both mass and light. However, the formulation of gravity as synonymous with acceleration effectively excludes this hypothesis from the model and leaves us bereft of any other means of explaining the experimental result apart from declaring a new principle of nature.
A principle is declared and not just for the local conditions in the Earthly laboratory, but for the whole of space everywhere and at all times!
Experimental evidence
Aficionados are adamant that there are many experiments that confirm the truth of the theories of relativity, to great precision. However, closer examination reveals things to be a little more complicated.
The precession of Mercury
The orbit of Mercury is elliptical, but the axes of such an ellipse are not static and rotate over time. Most of this precession is accounted for by Newtonian perturbations from the other planets; the residual, however, is contrary to the assumed action of a simple radial Newtonian force and needs some explanation.
The ‘solution’ from general relativity is to assume that gravitational effects do not propagate instantly through the space-time framework but do so at a finite speed, the ‘speed of gravity’ (Wikipedia). This allows calculations to be made that seem to explain the motion of the planet.
Note that again the term ‘space-time’ has moved from defining a mere coordinate system to becoming a complete, all-pervasive physical entity which is causal in directing events at a cosmic scale. It is responsible for moving ‘mass’ around through physical space and is in turn responsive to the presence of such mass, thereby altering its curvature… in order to move such a mass!
John Wheeler summarises:
Matter tells spacetime how to curve, and curved spacetime tells matter how to move – John Wheeler
This should be a massive red flag. The language of causation is used but the causal chain is circular! How do you preserve your own sanity with such an attitude? How does the ‘telling’ happen? What is the mechanism please? How does anything happen at all?
Returning to the precession of Mercury, we need to do some actual calculations within the framework of general relativity in order to prove our point. It turns out that the calculations for the altered orbit were actually performed within the framework of parameterised post-Newtonian formalism (Wikipedia).
This framework is Newtonian in spirit, Newtonian in name and uses the very Newtonian concepts of:
Newtonian gravitational potential
Momentum
Angular momentum
Gravitational potential energy
Kinetic energy
Parameterised post-Newtonian formalism is therefore a de facto extension of Newtonian physics. The Wiki post tries to squirm out of this by claiming that it is a Newtonian approximation to general relativity, but if all of the computation requires Newtonian-type quantities within a Newtonian framework, then what has been gained by calling it general relativity?
The idea that effects travel through Einstein space-time at the speed of gravity (speed of light) is a MacGuffin employed to distract and give validation to the fashionable theory of the day. We could just as well have said that Newton’s gravity propagates at the speed of light and come up with exactly the same results using exactly the same post Newtonian formalism.
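For reference, the residual advance that these post-Newtonian calculations reproduce comes from the standard formula Δφ = 6πGM/(c²a(1−e²)) per orbit; a quick check with textbook constants gives the famous figure:

```python
# Standard perihelion-advance formula, quoted here for reference only.
import math

GM_SUN = 1.32712e20   # m^3/s^2
c      = 2.99792e8    # m/s
a      = 5.7909e10    # Mercury's semi-major axis, m
e      = 0.2056       # Mercury's orbital eccentricity
period_days = 87.969

dphi = 6 * math.pi * GM_SUN / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43
```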
Once again, the theory of relativity is just a more complicated way of doing the same Newtonian physics.
The theory of general relativity is still not well-defined and so no amount of accurate predictions can confirm such a theory as: there is no theory!
Why have things gone so wrong?
Reading back through the post it is evident that the same themes crop up time and time again and that the same basic mistakes are responsible for leading the aspiring theorist astray. Einstein himself started off with good intentions but still thought in the same basic patterns and so ended up in the same blind alleys.
The mistakes arise from a few fundamental assumptions which seemed fine at the time but have proved to be crippling in the development of a consistent cosmology:
Error 1: Physics is downstream of mathematics
Almost all physicists believe this, but it just isn’t true. The idea of a reference frame upon which to hang physical events started out fine, but we ended up with a space-time that was physical, curved, dynamic and ultimately causative. This seems inevitable in hindsight, as physical reality must always somehow reference such a system (in order to travel in a straight line, for example) and so the coordinate system ends up partaking of physical reality, even if only passively.
The solution is to take observed physical events as the basis for a science and any apparent order in the form of a consistent coordinate system to be regarded as emergent from these observations.
Error 2: The world is not ‘Newtonian’
The Newtonian world consists of ‘objects’ moving around in space that is empty apart from a few gravitational forces emanating from those objects themselves. A ‘separation’ is built into reality of space, distance, force and object. Forces emanate from ‘matter’, matter takes prime place in the causal chain and matter is somehow aware of a separate coordinate system. Each element of reality is subject to different laws.
In terms of a solution from field physics, the cosmos consists solely of field interactions at every point in the cosmos, with matter, mass and forces constituting observable and measurable effects which, by virtue of their salience, attain an undeserved prominence in our cosmology. To regard such emergent effects as ‘fundamental’ will clearly result in failure.
Error 3: The innate properties of objects
Mass and inertia are held to be ‘innate’ properties of matter and this distortion percolates down even into relativity. The idea should be considered that both are emergent properties arising from the interaction between matter and field structures, rather than immutable properties of matter itself. This becomes evident in John Wheeler’s statement above where mass and space-time curvature are obviously precisely the same thing, but he can’t quite bring himself to say so for some reason.
Nobody regards ‘friction’, for example, as an innate property of matter, so why regard ‘inertia’ as an innate property of matter?
Error 4: Locality bias
The idea that an experimental result in a laboratory is somehow representative of physics at all points in the universe for all time is a clear bias.
Error 5: The fixation on causality
This is another Newtonian concept, that events proceed in a ‘causal’ chain from some original cause (Big Bang) to the complexity we see at the present. In reality, the entire cosmos evolves as a whole and any perceived ‘events’ are merely emergent and observable effects of such an evolution. To describe such events as ‘fundamental’ and such causal chains as controlled by ‘fundamental’ laws is misleading and again crippling in the formulation of a consistent cosmology.
As an example, consider Wheeler’s statement that “Matter tells spacetime how to curve, and curved spacetime tells matter how to move”. It is evident from this that matter and spacetime move in concert with each other and are effectively synonymous, but the conceptualisation of the two as fundamentally different entities necessitates some sort of physical coupling, and the abstract idea of ‘causality’ has been roped in as yet another MacGuffin to cement over the cracks, with no mention of an actual physical mechanism. Such a thing is not thought necessary because the abstract idea of ‘causality’ is so readily accepted.
Error 6: Inability to assimilate an existing paradigm
An alternative to ‘causality’ had already been discovered in the form of the Navier-Stokes equations governing the flow of fluids and gases. Here there are no separate objects as such to exert forces upon each other, and no distinct ‘events’ to delineate causality. Instead fluids and gases are treated as a continuum whose behaviour is in accordance with a set of partial differential equations. This is about as far from intuitive as we can get, but nevertheless avoids all of the problems we are seeing. Reality evolves at each point in the continuum according to certain rules and that is all that happens. Any perceived order within the resulting activity is not a fundamental law but an emergent effect only.
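A minimal numerical sketch of this style of description (toy 1-D diffusion rather than full Navier-Stokes, at which this is only meant to gesture):

```python
# Continuum evolution in miniature: a 1-D diffusion equation stepped
# forward pointwise, with no 'objects' or 'forces', only local rules.
import numpy as np

u = np.zeros(100)
u[50] = 1.0                       # initial field: a single bump
nu, dt, dx = 0.1, 0.1, 1.0        # diffusivity, time step, grid spacing

for _ in range(500):
    # each point evolves from its immediate neighbourhood only
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())   # the bump has spread; no discrete 'event' caused it
```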
Error 7: The Laws of physics are not ‘reality’ and are not fundamental
The laws of physics are twice removed from reality: they take the form of abstract mathematical equations constructed in order to explain a finite number of measurements derived from a limited number of physical processes. They are not fundamental in any way, shape or form.
Contrast this self-evident truth with the attitude of physicists who are prone to declaring almost any new theory going as ‘fundamental’. Note that Einstein’s framework for relativity started off as merely an abstract coordinate system but quickly morphed into an actual physical process that shaped the entire universe by its causative properties.
Error 8: Linguistic overloading of the term ‘straight line’
The term ‘straight line’ can have several meanings:
Geometrically straight – with reference to a coordinate system
Inertially straight – the unimpeded path of a mass through space
Electromagnetically straight – the path of a light beam
There is no reason that these should all be the same and no evidence that they are. Newton’s first law is the assertion that 1 and 2 are equivalent, but without reference to a specific coordinate system. Einstein was so keen on the idea that all three were equivalent that he allowed for a curved geometry in order that it be so. The reality is that neither mass nor light moves through space along a coordinate system; both move through a gravitational field, driven only by local physical processes.
Energy conservation
The conservation of energy is widely held to be a fundamental principle of nature (of course it is!) However:
Energy as an abstract quantity is poorly defined
Many physicists will admit that it is not in fact conserved
Energy is frame-dependent in relativity and hence not absolute
In Newtonian physics it is relative to a reference frame which is fixed but undefined
No mechanism is provided for the transmutation of energy from one type to another
Consider two objects in space:
For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. – Wikipedia
We need a reference frame to describe acceleration, so imagine yourself as object A whilst object B accelerates towards you. You don’t feel yourself accelerating and you don’t perceive yourself as having potential energy or of converting it to kinetic energy. This immediately adds an asymmetry to the situation.
This is fine from the point of view of gravity and acceleration, but the claim here is that there is now some energy conversion, some physical process, happening at one place but not the other. Even this may be considered valid, but an observer at B will imagine the same situation but this time with the energy conversion happening at A. There is a disagreement as to what actual physical processes are taking place.
The doctrine of relativity will be fine with the velocity and acceleration disappearing at one observer as this is all frame dependent, but if the transmutation of potential to kinetic energy consists of some actual physical process then we are forced to concede that this physical process only ever happens in the other guy’s framework. This sounds like nonsense and so it probably is.
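The frame dependence is easy to exhibit numerically (a toy sketch, nothing more):

```python
# The same object's kinetic energy evaluated in two frames; a toy
# illustration of frame dependence using KE = (1/2) m v^2 throughout.
def kinetic_energy(m, v):
    return 0.5 * m * v**2

m = 1000.0             # kg, object B
v_seen_from_A = 10.0   # m/s, B as measured in A's frame
v_seen_from_B = 0.0    # m/s, B in its own rest frame

print(kinetic_energy(m, v_seen_from_A))  # 50000.0 J according to A
print(kinetic_energy(m, v_seen_from_B))  # 0.0 J according to B, same moment
```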
Physicists don’t notice this happening as they have no physical definition of ‘energy’ or energy ‘conversion’ and so have no requirement to say why it only seems to happen to somebody else. However, we do not need to specify a mechanism in order to suppose that one might exist, and that if it does exist, then it must exist in some ‘absolute’ sense if energy is to be transmuted.
To see what sort of mechanism might be in play, we note that kinetic energy is really just half the mass times the velocity squared, and ‘potential’ is just a function of position in a gravitational field. The conversion of potential to kinetic energy is then equivalent to a mass acquiring velocity within a gravitational field.
This is now an identical argument to the one above concerning acceleration under a gravitational field. There must be some mechanism by which this happens and it must be in effect locally to make objects move. It must therefore be in effect even in the rest frame of the observer, i.e. even when the observer appears to himself to be not accelerating.
The theory of relativity, then, seeks to explain away all mechanisms which may be dependent upon acceleration by simply pretending that they don’t exist or at least will vanish in an appropriate reference frame. This has the effect of limiting, rather than expanding, the number of phenomena that can be explained by such a theory.
Rotational movement
Newton put some water in a bucket, suspended it on a rope, spun it and watched the water climb the sides of the bucket. Why this should happen was argued over long afterwards, most famously by Ernst Mach two centuries later, without satisfactory resolution.
Einstein described what he thought was an equivalent situation but with a globe spinning in space:
Following Einstein’s (1914, pp. 1031–1032) lead, [..] we consider a globe, held together by non-gravitational forces, rotating with respect to the fixed stars, [..] In this case, the centrifugal forces, rather than giving the surface of the water in the bucket its tell-tale concave shape, make the globe bulge out at its equator. – Michel Janssen
Ouch! There is a big assumption here, which is that centrifugal forces exist at the cosmic scale in the same way that they do in a laboratory within a strong gravitational field at the Earth’s surface. Observational evidence, however, shows that the bulge of a planet is not uniquely determined by its size, mass and rate of rotation. Our sun, for example, has almost no bulge at the equator whilst our moon has a noticeable bulge but little rotation.
Gravitational fields are thought to have some inertial component, even by Einstein, and so it should be considered that the inertia experienced by Newton’s water arises from the fact that it is being dragged through the inertial field of the Earth’s gravity, and that it is this inertial drag that gives rise to the centrifugal forces causing the water to climb the sides of the bucket. The water may have its own gravitational field but the Earth’s field dominates the experiment whilst the bucket spins within it.
The situation of a planet in space is completely different. The Earth is not spinning within a strong enclosing field, but its own field spins with it and again dominates proceedings. There is no reason at all to suppose that centrifugal forces will arise during this situation and no reason to connect the rate of spin with an equatorial bulge.
The whole system forms a spinning vortex field and the resulting activity conforms to the laws and patterns of vortex physics; compare the structure of a barred galaxy. The field spirals inwards in a manner similar to a hurricane before stabilising at a fixed radius, within which solid-body rotation occurs.
In the system of the Earth, the planet engages in solid body rotation whilst the gravitational field spirals inwards. A zero-slip condition at the surface gives us the inertial framework we are familiar with and easily explains the Michelson-Morley results if we allow that the propagation of light is not through empty ‘space’ but through the gravitational field itself.
Any equatorial bulge is determined by the dynamics of the vortex system as a whole.
E = mc²
By now, this equation can simply be treated as a joke!
There is no physical definition of ‘energy’ and no direct way of measuring it, merely inferences made from an as yet unproven and undefined theory. There is only a circular definition of mass and again, no consistent method of measuring it (The gravitational ‘constant’). The E in the equation does not mean what most people think and is something called Einstein’s ‘rest energy’; the ‘m’ here is similarly a ‘rest mass’. These are novel, imaginary quantities arising as artefacts of the theoretical framework.
These are quantities derived from a theory which is rooted in:
Considerations of imaginary experiments whose outcomes were invented
An arbitrary decision to declare the speed of light constant, with insufficient experimental evidence
Goal-oriented attempts to eliminate any physical differences between acceleration and gravity
A failure to define acceleration, gravity or mass
In popular imagination, the energy described in this equation is real energy that is somehow bound up in the structure of an atom and can somehow be harnessed for the purposes of atomic energy or bombs. However, note that none of the foundational elements of the theory, nor any of the equations, have anything at all to do with the structure of an atom. How then can the theory say anything at all about the energy contained in such an object?
Practical examples of E = mc²
Wikipedia gives some ‘practical examples’ in support of the mass-energy equivalence:
A spring acquires extra mass when it is compressed
A weight acquires extra mass when heated
A spinning ball has greater mass than when it is not spinning
We should expect, given the iconic status of the equation, that they have done due diligence, checked the sources and provided good references to support their claims.
The language used suggests that these experiments have actually been performed and the results measured; however, no citations are given, and a quick AI search can find no actual experimental results in support of a single one of these claims!
In addition, the same article contains the following statement:
The “gadget”-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy released in this explosion carried the missing gram of mass. – Wikipedia
The language suggests that they actually performed the experiment, that they actually measured the mass and energy of the end results of an atomic bomb explosion!
Accurate measurements of such quantities are clearly impossible. The reference supplied gives an estimated ‘yield’ of 21 kt, but only to within an accuracy of 10% (Malik). This is not the impression given by the Wikipedia article. To cite this experiment as evidence of the mass-energy equivalence is wholly dishonest.
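For what it is worth, the arithmetic behind the Wikipedia claim is easily reproduced. A minimal sketch (assuming only the quoted figures of 21 kt and roughly one gram, plus the standard conversion of 4.184×10¹² joules per kiloton of TNT) shows that the ‘one gram’ is simply back-calculated from the estimated yield:

```python
# Reproduce the arithmetic behind the Wikipedia claim.
# Inputs are the quoted figures, not measurements:
c = 2.998e8               # speed of light, m/s
joules_per_kt = 4.184e12  # standard TNT-equivalent conversion

yield_joules = 21 * joules_per_kt   # quoted yield: 21 kt (+/- 10%)
mass_deficit = yield_joules / c**2  # mass implied by E = mc^2

print(f"implied mass deficit: {mass_deficit * 1000:.2f} g")
# ~0.98 g: the 'almost exactly one gram' is back-calculated from an
# estimated yield, not independently weighed after the explosion.
```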
We still have no experimental evidence for the famous equation.
The constancy of the speed of light
Albert Einstein postulated that the speed of light c with respect to any inertial frame of reference is a constant and is independent of the motion of the light source. – Wikipedia
.. and..
The speed of light is the same for all observers, no matter their relative velocity. It is the upper limit for the speed at which information, matter, or energy can travel through space. – Wikipedia
These both seem like massive overreach given the experimental evidence or lack thereof.
Alternative hypotheses should be sought.
Alternative hypothesis: The ideas described as the Inertial Field Theory (Gravity as an inertial field) are correct and should be explored as possible explanations for the various effects purporting to support Einstein’s proposal.
This theory proposes that gravity is a moving, accelerating inertial field which adopts a vortex structure in space and centres upon the Earth. Both matter and light move within this field even in a vacuum and the movements of both are affected by local field conditions. In the case of matter, the field imbues objects with both inertia and gravitational mass, and in the case of light, the speed and direction are very possibly altered.
Laboratory conditions: This field rotates along with our planet and thus there exists a thin layer at the surface of the Earth where a stable field condition provides the laboratory conditions that we are familiar with and within which almost all experiments are performed. The field is roughly isotropic as far as inertia is concerned and ‘accelerates’ towards the Earth to provide gravity. If a beam of light travels at the same speed in all directions within any laboratory, then this is not surprising. The light uses the gravitational field as a ‘carrier medium’ and will inherit the velocity of such a field. This is the Michelson-Morley experiment.
The solar system: The stars are said to move according to the precession of the Earth’s axis, but the planets are not seen to do the same, which implies that the whole of the solar system is rotating and tilting along with the Earth’s axis. This is consistent with the notion that the solar system is the centre of a giant cosmic vortex and is undergoing ‘solid body’ rotation similar to that of the centre of barred galaxies (see image below).
The gravity of the solar system therefore forms its own ‘inertial frame’ (literally now) and all movement of matter and light will be in relation to this roughly isotropic field.
Deep space: A free-falling laboratory in deep space is not moving relative to the local gravitational field but is being dragged along by it, and so we expect the speed of light to be constant in all directions.
Gravitational lensing: Light is said to bend around massive objects and this surely implies some sort of interaction between light and a gravitational field. There is therefore some physical process at work as a result of this interaction and it is this which needs a thorough investigation. Simply saying ‘the light is bending because space is curved‘ is again avoiding the question and discouraging further inquiry. Light has a physical ‘nature’ and so does gravity and to investigate these is the duty of the physicist.
No surprise: In all the cases above, we expect light to travel at the same speed in each direction, not for the reasons stated by Einstein but for other, more prosaic reasons, which are specific to the local conditions and arise from some, as yet, unspecified laws of physics that control the interaction between light and gravity.
Geo-stationary orbit: This is more interesting. A geostationary space station is moving at speed transversely to the radial field lines of the gravitational field but is stationary with respect to the radius and thus is subject to an inward accelerating flux of such a field. What do we expect light to do in this situation? Will we see the same speed in each direction? Has anybody measured this?
According to Einstein, the speed of light will be the same again.. because he has declared it to be so! However, the mechanics of the situation are different here and so why should we not expect a different outcome? This does not seem unreasonable.
Summary
This is obviously a real mess, with the whole theory having flawed foundations, undefined terms and insufficient empirical evidence to support the claims. In particular the idea of an ‘inertial frame of reference’ is ambiguous at the very least. This is unforgivable since inertial frames of reference lie at the very heart of the theoretical framework and without them there is simply no theory.
Einstein failed to show that gravity is equivalent to acceleration and failed to justify the constancy of the speed of light in any meaningful way.
We have:
No properly defined coordinate system
Velocity and acceleration are therefore undefined
‘Mass’ is ultimately undefined
No new physics
No mechanisms described
Ambiguous terms
Definition creep
Conclusions drawn from ‘thought experiments’
In addition, if we look for empirical evidence we find:
Exaggerated claims made from little evidence
Too much weight placed upon the Michelson-Morley experiment
Failure to consider alternative solutions
Failure to explain the precession of Mercury
Failure to explain or even define rotary motion (Newton’s bucket)
Conclusion: Gravitational fields exist and act via a specific mechanism but the central idea of Einstein is to explain away the effects of gravity by rephrasing it as simply ‘acceleration’, thereby removing any need to describe the mechanism.
The other idea, to simply declare the speed of light to be constant, similarly circumvents the need to describe any physical process by which this might happen. No new physics has been proposed, merely some arbitrary restrictions on how we may interpret measurements.
These are fundamentally flawed ideas and hence the theory can never, ever, amount to anything useful.
The claim that we weigh less at the equator because of centrifugal force is not supported by empirical data. Natural variations of the gravitational field owing to variations in planet density are sufficient to account for the differences in weight along a given latitude, and the equatorial bulge is sufficient to account for the differences in weight from pole to equator. The results are consistent with the idea of the Earth creating its own spinning frame of reference relative to which the planet itself is actually stationary.
The data
I asked an AI engine to give me the values of gravitational acceleration across the globe.
So the variation between poles and equator is the difference between 9.832 and 9.780, which is 0.052 m/s².
I now asked for typical variations across a single latitude
So the pole-to-equator difference in gravitational strength is no larger than the variation found along a single latitude. This variation, then, may not be attributed to a spinning Earth without further evidence.
Equatorial bulge
I now asked the engine to summarise the variation in gravitational field strength according to the bulge of the equator alone.
So to get from the stronger gravity at the poles to the weaker gravity at the equator, we take the pole value of 9.832 and multiply by 0.9933 to get 9.766. The difference between these two values is 9.832 minus 9.766, which is 0.066 m/s², that is to say, an even bigger difference than is actually measured. There is no need for any additional adjustment here; everything is explained by the bulge alone.
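The arithmetic above can be checked in a few lines, using only the figures already quoted (a sketch of the calculation, not new data):

```python
# Reproduce the pole/equator arithmetic quoted above.
g_pole = 9.832          # m/s^2, quoted polar value
g_equator = 9.780       # m/s^2, quoted equatorial value
bulge_factor = 0.9933   # quoted bulge-only correction

measured_diff = g_pole - g_equator    # pole-to-equator difference
g_bulge_only = g_pole * bulge_factor  # equator value from bulge alone
bulge_diff = g_pole - g_bulge_only

print(f"measured difference:   {measured_diff:.3f} m/s^2")  # 0.052
print(f"bulge-only difference: {bulge_diff:.3f} m/s^2")     # 0.066
# The bulge alone already over-accounts for the measured difference.
```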
Too many variables?
We have a measured variation of 0.05–0.07 m/s² across the globe, along lines of latitude and from equator to pole. We have theoretical variations of around 0.05 m/s² predicted from each of crustal variation, centrifugal force and equatorial bulge.
Sometimes a measurement is attributed to crustal variation, sometimes to equatorial bulge and sometimes to centrifugal force, seemingly dependent upon the argument to be made at the time.
This is no way to do science. There are too many variables to be resolved in a few ad hoc experiments and certainly, in the data above, no chance of sensibly interpreting any single measurement or attributing any single cause with any degree of certainty.
Centrifugal force?
The variation in weight between the poles and the equator is adequately explained by the bulge of the planet at the equator. There is no need to bring centrifugal force into the equation, as there is simply no requirement for it given the data.
If the calculations and measurements above are correct then additional adjustments for centrifugal force will in fact give incorrect results. This suggests that centrifugal forces at the planetary scale are not merely irrelevant but perhaps even non-existent.
A rotating frame of reference
Experiments demonstrating the existence of centrifugal force are all small-scale affairs performed in laboratories within the Earth’s gravitational field, whether at the surface of the Earth or in freefall nearby. The effects seen can therefore be explained by the action of objects moving through an ‘inertial field’ as explained here: Gravity as an inertial field
It is far from obvious, however, that the phenomena of rotation, Coriolis forces and centrifugal forces can simply be transferred from a laboratory to the scale of a planet within a solar system. If scientists claim that they can, then this must be rigorously demonstrated with data and arguments that are somewhat more reliable than the ones presented above.
Attempts to demonstrate the rotation of the Earth by means of a Foucault pendulum are no more rigorous and no more conclusive than those described above: Gravity as an inertial field
If the Earth’s inertial field rotates along with its mass, then there is no centrifugal force to alter the weight of an object at the equator. This is entirely consistent with the above data and even supported by it.
Gravity is a ‘field of inertia’ that accelerates towards the Earth and forms a frame of reference for the kinetic behaviour of all solid objects. Objects moving with the acceleration are in free-fall and experience equal inertial resistance in all directions, implying that the field is somewhat isotropic in this respect.
The field near the Earth’s surface accelerates towards the Earth and rotates around with it thereby providing a local inertial frame of reference that both accelerates towards the ground and moves with the surface. Dropped objects will fall ‘vertically’ as a consequence; they are moving vertically with respect to the (rotating) gravitational field and hence with the ground.
No appeal can be made to either linear or angular momentum as fundamentals of this framework – they need to be ‘derived from’ the framework, not ‘added to’ it.
The mass of the Earth is stationary within this inertial framework which takes upon the aspect of a cosmic vortex within the larger vortex of the sun’s gravitational field.
If there happen to be perturbations in the vortex field then these are transferred to the Earth and will account for the variations in day length (up to 20 minutes a day in the case of Venus!). No ‘force’ is needed here to move an entire planet, merely a modulation of the gravitational field which necessarily influences the whole planet regardless of its ‘mass’ (another Newtonian concept).
The atmosphere of the Earth is not dragged around by friction as some claim but is actually stationary (on average) relative to the inertial field at the surface of the planet. Atmospheric pressure is created by the inward acceleration of the vortex as a whole; the ‘vortex principle’. The gravitational field at the surface of the planet provides a frame of reference which is stationary with respect to the surface and the whole weather system operates within this frame of reference.
The centripetal nature of the vortex accounts for the spherical shape of the sun, which shows no significant equatorial bulge.
Gravitational acceleration of objects is merely the behaviour of such objects that are stationary with respect to the accelerating inertial field. Geo-stationary objects are actually accelerating upwards with respect to the inertial field.
Objects acquire inertia according to local field conditions only and so the rotational speed of the Earth around the sun is irrelevant, as is the speed of the sun through space and the properties of distant galaxies.
The field at the Earth’s surface provides an inertial frame of reference. The water in Newton’s bucket rotates with respect to this frame and the effect needs no further explanation.
The field is electromagnetic in nature and permeates all matter. Matter itself consists of electromagnetic field modulations. Inertia arises from the interaction between the two fields and consists of a sort of ‘field drag’.
Imagine that, as a force is applied to a stationary object, the movement of matter interacts with the gravitational field to produce some sort of eddy currents. This ‘electromagnetic friction’ opposes the movement initially but once the currents are established, they will tend to persist and serve to preserve the constant velocity of the object with respect to the field. This is interpreted as ‘momentum’ in classical physics; it will take another force to slow the object down. Momentum, velocity and kinetic energy are all relative to the local field conditions.
Conversely, if a gravity field accelerates past (through) an object, then electromagnetic eddies are formed and the object is dragged along with the field, in a manner somewhat analogous to a river dragging a boat, or maybe a sponge, downstream.
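The ‘field drag’ picture can be made concrete with a toy numerical model; the drag rate and field velocity below are illustrative assumptions only, not derived quantities:

```python
# Toy model of 'field drag': an object's velocity relaxes towards the
# velocity of the local field rather than towards absolute rest.
# Both k (drag rate) and v_field are illustrative assumptions.
k = 0.5        # 1/s, assumed drag strength
v_field = 3.0  # m/s, assumed local field velocity
v = 0.0        # object starts 'stationary'
dt = 0.01      # s, time step

for _ in range(2000):            # simulate 20 seconds
    v += k * (v_field - v) * dt  # drag towards the field velocity

print(f"velocity after 20 s: {v:.3f} m/s")
# The object ends up co-moving with the field; once co-moving it feels
# no force, which classical physics would read as 'momentum'.
```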
The concept of absolute space is not particularly useful in this respect as all free movement is relative to the local gravitational field. Konstantin Meyl goes further and claims, with good reasons, that the local field conditions also determine length, time, the speed of light and even geometry.
Q: What happens outside of a gravitational field? A: There is no such place.
Each planet of the solar system is at the centre of a gravitational vortex, with interaction with neighbouring or enclosing vortices being complex and according to the laws of electrodynamics. We should expect, from the point of view of Newton’s gravity, to see odd relationships between the planetary orbits and to suspect the existence of hidden (‘dark’?) energies influencing heavenly bodies.
Our atmosphere remains in a thin layer around the planet owing to gravitational attraction, but how does it maintain an identical rotational speed, and why are there not 1,000-mile-an-hour winds at the equator?
The most common explanation from AI searches and physics forums is that the atmosphere is dragged around by friction with the Earth’s surface. This is not credible and is contradicted by everyday observations and common sense.
Some explanations describe a no-slip condition at the Earth’s surface surmounted by a shear layer rising away from the surface, but weather balloons rise vertically and contrails can be seen stationary above us for many minutes; there is no shear layer.
Others say that, over the millennia, the whole atmosphere has acquired sufficient angular momentum to spin with the Earth, and will maintain such synchrony in the future. There are many problems with this:
The air does not maintain synchrony with the Earth’s surface. Cyclonic structures are the norm, with the wind travelling both slower and faster than the spin of the Earth, and both from west to east and from east to west. Moreover, we see wind travelling north to south and vice versa. In all these cases, the wind is not moved by friction with the surface, but by the laws of aerodynamics.
The eye of a hurricane moves at relatively slow speeds (10-15 mph) with respect to the surface of the Earth. This speed is determined by the dynamics of the hurricane as a whole and not by local friction between the air and the surface, so the hurricane as a whole is somehow attuned to, or cognizant of, the rotational speed of the ground. We have winds with huge speeds in most of the hurricane with parts blowing with the rotation and others against the rotation. Is it really credible, amongst this mayhem, that friction with the ground somehow stabilises the whole system to move approximately in alignment with the Earth’s rotation? Surface friction is clearly irrelevant to most of the cyclone.
A vast amount of kinetic energy is surely lost in storms and converted to heat but after the storm is over, the wind is seen to be travelling in synchrony with the surface again; there is no need for a millennium of readjustment to take place for this to happen.
The (moderate) wind outside my window has abated to leave a remarkably still garden. I did not see a slow return to normality caused by shear stress. How does the air know what ‘stillness’ is? There appears to be some atmospheric frame of reference to which all air returns whenever it is not being pushed around by other pieces of air. What is this frame of reference?
A few mild gusts and eddies now appear in my garden. The air is being pushed around locally by neighbouring masses of air. I see the air move the trees a bit, but I don’t see the trees moving the air at all. The eddies die down, but not because of friction with the ground. The kinetic energy of the eddies has been dissipated by friction within the airflow itself which, depleted of such energy, has then become motionless relative to some local frame of reference. The air ‘knows’ its place.
The solution
The gravitational field of the Earth forms a roughly isotropic field of inertia at the surface of the planet which acts as a frame of reference for all physical laws and all observable activity.
The field accelerates towards the ground, giving rise to gravitational acceleration, weight and atmospheric pressure. If we factor out the acceleration, then the field gives rise to the same inertial resistance in all directions. The vertical (accelerative) component of the field drops off with the inverse square of the distance, but there is also an inertial component which exists both in the vertical and horizontal directions.
The field rotates with the Earth at all latitudes and so the air moves locally as if there were no rotation, as if the Earth were stationary.
Newton’s bucket
In the case of Newton’s bucket, the water will be dragged around to form a dip in the middle, but when the bucket stops rotating, the water will settle down to a level surface. Once again there is a sense of a (local) ‘frame of reference’. A rotating solid object will rotate indefinitely owing to conservation of momentum, but fluids and gases behave differently in an inertial field: inertial drag, having a vortex nature, will promote eddies in the molecules of fluid or gas, which lead to internal friction and eventual stabilisation with respect to the frame of reference.
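A toy contrast between the two behaviours might look as follows; the internal-friction decay rate here is an illustrative assumption, not a measured quantity:

```python
# Toy contrast: a rotating solid keeps its rate indefinitely, whereas
# a fluid's rotation relative to the local frame decays via internal
# friction. The decay rate k is an illustrative assumption.
import math

omega0 = 2.0  # rad/s, initial rotation of the bucket's contents
k = 0.1       # 1/s, assumed internal-friction decay rate
t = 60.0      # s, one minute after the bucket stops

omega_fluid = omega0 * math.exp(-k * t)
print(f"solid after {t:.0f} s: {omega0:.3f} rad/s (unchanged)")
print(f"fluid after {t:.0f} s: {omega_fluid:.5f} rad/s (settled)")
```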
Coriolis forces
The above hypothesis makes quite a powerful prediction which is that there are no such things as Coriolis forces at the planetary scale.
This idea came as both a surprise and a shock whilst writing the article and needs addressing. Scientists are adamant that the behaviour of gases, fluids and solid objects is affected by Coriolis forces that deflect the motion of objects from a straight line relative to the surface of the Earth and cause pendulums to swing in a plane fixed relative to the ‘fixed stars’.
We need at least to account for:
The claimed Coriolis forces affecting the weather
The motion of a Foucault pendulum
A ball thrown in a rotating room will appear to follow a curved path because it is really moving in a straight line relative to an inertial frame of reference which, on the evidence above, follows the rotation of the Earth. However, if such a frame of reference really does rotate with the Earth, then any projectile or stream of air at the surface of the Earth will travel in a straight line, where ‘straight’ is, by definition, aligned with the Earth’s rotation.
This is said not to happen, with both streams of air and large pendulums claimed to align, not with the Earth’s rotation but with some other frame of reference, either an ‘absolute’ frame (mechanism not supplied) or with respect to the ‘distant stars’ (mechanism not supplied).
The Earth’s gravitational field seems locally almost identical at each point on the surface, but we cannot rule out that there may be slight variations in the horizontal component that may vary slightly across latitudes and be responsible for meaningful variations in movement over long distances or time intervals.
Before thinking about this, however, we need to check what sort of variations we are required to explain.
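As a yardstick, the textbook Coriolis acceleration, the very effect whose planetary-scale reality is disputed here, is simple to compute from the standard formula a = 2Ωv·sin(latitude); the wind speed below is illustrative:

```python
# Standard-theory Coriolis acceleration: a = 2 * omega * v * sin(lat).
# This is the effect whose planetary-scale reality is disputed here;
# the numbers show the size of variation that needs explaining.
import math

omega = 7.2921e-5  # rad/s, Earth's rotation rate (standard value)
v = 10.0           # m/s, illustrative wind speed

for lat in (0, 30, 60, 90):
    a = 2 * omega * v * math.sin(math.radians(lat))
    print(f"latitude {lat:2d} deg: a = {a:.2e} m/s^2")
# Of order 1e-3 m/s^2 at mid-latitudes: tiny locally, and difficult
# to disentangle from the vortex dynamics that dominate real weather.
```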
Coriolis forces and the weather
A Coriolis force is assumed to arise from the phenomenon of ‘momentum’, which in turn is a derivative of inertia. If the whole gravitational field is stationary with respect to the rotating surface of the Earth, then ‘inertia’ is also aligned with the surface movement.
I made some attempt to find out whether there really are such things as Coriolis forces affecting the weather, but got bogged down in circular arguments, ‘arguments from assumption’ and downright contradictions.
I asked AI to explain whether Coriolis forces really did affect the weather. The answers look like they are drawn straight from discussions on physics chat forums.
Cyclones (low-pressure systems) rotate counter clockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. This rotation is not due to wind patterns alone—it directly results from the Coriolis effect acting on large-scale air movements.
But there are cyclones near the equator and both clockwise and anti-clockwise systems exist in the northern hemisphere.
The Coriolis effect is necessary to explain the direction of rotation; without it, wind would flow directly from high to low pressure.
This is just not true. Stir a cup of tea and you will create a vortex. The pressure gradient goes from high at the periphery to low at the centre but the flow of water is almost at right angles to the pressure gradient and never along it. The same is true of cyclonic structures in the atmosphere.
Trade winds blow from the northeast in the Northern Hemisphere and the southeast in the Southern Hemisphere—again, due to Coriolis deflection.
They may well do this, but where is the proof that it is caused by Coriolis forces?
Jet streams—fast-moving air currents high in the atmosphere—also follow curved paths influenced by the Coriolis effect.
Again, we would like some sort of argument to show that the Coriolis effect is causal here. An air current cannot just be influenced to follow a curved path; the air on either side of it must have somewhere to go, and wherever it goes must in turn displace other air to make room. The system is organised globally as a series of vortices, this being a necessity for the preservation of topological continuity. The vortex structure dominates the flow patterns and it will be hard to discern or quantify any Coriolis influences within this pattern, particularly when the vortices go round the ‘wrong’ way.
Rotating tank experiments simulate Earth’s rotation and show how fluids (like air or water) develop spiral motion due to Coriolis-like forces.
Yes, but these are rotating tanks within a stationary frame of reference (gravitational field). The whole point of the above arguments is that the Earth’s inertial field rotates of itself, is stationary with respect to the surface of the Earth and therefore not rotating at all for the purposes of laboratory experiments.
The statement “Rotating tank experiments simulate Earth’s rotation” pretty much assumes the thing that is to be proved, which is that small scale experiments can be scaled up to the size of the Earth; they can’t. However, it isn’t the scale that is the problem but the nature of the gravitational field; it cannot act both as a reference frame for laboratory experiments and for the whole planet itself at the same time.
These experiments reproduce cyclonic patterns similar to those in Earth’s atmosphere.
Yes, but cyclonic patterns are produced by the laws of fluid flow and need no rotational impulse to get started; just try preventing water from forming vortices and see how far you get.
Major ocean currents (e.g., the Gulf Stream, the Kuroshio Current) follow curved paths and rotate in large gyres consistent with Coriolis deflection.
The movements of ocean currents are very heavily influenced by the shape of the land masses, convection currents and the laws of fluid dynamics.
The Coriolis force is described mathematically in the equations of motion for rotating systems (e.g., the Navier-Stokes equations).
This is theory, not observational evidence, and the whole point of the argument in this post is that the theory is inapplicable, as the Earth is evidently not a ‘rotating frame of reference’, but a ‘stationary frame that rotates’ (within the solar system).
Reminder: Classical theory has yet to explain just what a ‘rotating system’ is rotating relative to; ‘absolute space’ doesn’t really suffice as a get-out clause any more.
Foucault’s pendulum
The rotating plane of swing of a Foucault pendulum is often cited as a triumph of scientific achievement and is claimed to prove:
That the Earth is round
That the Earth is rotating
That the Earth is rotating at a specific rate
That the Earth is rotating with respect to some fixed frame of reference
That the laws of Newtonian physics hold
A single experiment clearly cannot prove all these things at once.
Furthermore, from the Wikipedia article and associated Talk tab, we have:
No pendulum has been seen to complete a single revolution in a single day
A pendulum at the equator is claimed not to rotate at all, but this experiment has never been performed
An experiment at the South Pole initially showed the Earth rotating the wrong way round: [link]
A second experiment gave a rotational period of 12 hours instead of 24
Further experiments achieved a rotational period of 24 hours ± 50 minutes
Results deemed to be incorrect were discarded and ‘refinements’ (unspecified) made to ‘improve’ the results
Experiments appear to be ‘goal oriented’
The results they are aiming for assume a spherical Earth, but the Earth is ‘oblate’
The only data claiming to be accurate at other latitudes comes from Foucault himself and he can hardly be said to be impartial.
Only a single latitude was attempted
The swing of the weight is heavily influenced by air currents and initial conditions
An attempt to reproduce Foucault’s experiment demonstrated an initial planar swing degenerating to an elliptical pattern after only an hour
No pendulum will swing all day without ‘help’
There is no quality control on the manufacture of the equipment and one pendulum simply snapped and fell to the ground
A pendulum at the equator would provide a good control but nobody has tried this
A series of precise and reproducible experiments using the same equipment at multiple latitudes is required but never even attempted
Publicly displayed pendulums are made to knock down skittles (see image above) which allows the possibility of controlling the precession to some degree
We frequently see theoretical predictions masquerading as experimental results. For example: “A Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counter clockwise 360° in two days.” How do you know this if it has never happened? (The classical prediction behind this figure is sketched after this list.)
“Heike Kamerlingh Onnes performed precise experiments and developed a fuller theory of the Foucault pendulum for his doctoral thesis (1879). He observed the pendulum to go over from linear to elliptic oscillation in an hour. By a perturbation analysis, he showed that geometrical imperfection of the system or elasticity of the support wire may cause a beat between two horizontal modes of oscillation.” – Wikipedia
The plane of swing is affected by an eclipse
The amplitude of swing is affected by an eclipse
The eclipse effect is ridiculed on the Talk page but without further explanation
The ‘fixed frame of reference’ with respect to which the pendulum is assumed to maintain its plane of swing is never clearly identified, nor any mechanism by which a pendulum might interact with it.
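For reference, the classical prediction that all of the figures above are being measured against is T = T_sidereal / sin(latitude); a minimal sketch of that formula at a few illustrative latitudes:

```python
# Classical Foucault prediction: the plane of swing precesses with a
# period T = T_sidereal / sin(latitude). These are the theoretical
# figures the experiments above are measured against, not results.
import math

T_sidereal = 23.934  # hours, one sidereal day

for lat in (90.0, 48.85, 30.0):  # pole, Paris, the quoted 30 degrees
    T = T_sidereal / math.sin(math.radians(lat))
    print(f"latitude {lat:5.2f} deg: predicted period {T:6.2f} h")
# At 30 degrees the prediction is ~48 h ('two days'): a prediction of
# the disputed theory, frequently reported as if it were observed.
```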
Conclusions from experimental evidence of Coriolis forces
The arguments for Coriolis forces at the planetary scale and the scant evidence from Foucault pendulum experiments are insufficient to support the historic claims made for them and at the same time do not contradict the idea of a gravitational field acting as an inertial frame of reference which is stationary with respect to the surface of the Earth.
Gravity as an electromagnetic field
The nature of the field can be largely derived from everyday observations as above, but we can consider the idea that it is in fact an emergent property of an electromagnetic field and equivalent to the sum of all the magnetic dipoles of all the spinning charge comprising the planet. This will provide further insights.
If this is true then the gravitational field is continuous with all the atomic charge fields and hence its movement must necessarily be continuous with the rotation of the Earth. Such a field will have a complex, fine-grained structure and, although diminishing with radius in the manner of a Newtonian field, will not consist of a simple radial field but will have meaningful horizontal components which give rise to inertia.
Konstantin Meyl posits such a field with his Theory of Objectivity and allows for nothing else existing in reality apart from such a field. A ‘field’ in physics is described by differential equations and obeys the Locality Principle, meaning there is no action at a distance and that all behaviour is determined completely by strictly local field interaction.
It follows from this that the behaviour of water in a spinning bucket is determined solely by local (gravitational) field conditions and is unrelated to any influence from the distant stars or from any such thing as ‘absolute space’. There is no provision within the field equations for any external influence and no need for an independent frame of reference as the field itself provides its own reference frame which is usually of a toroidal geometry.
Newton claims that a body will move in a straight line unless acted upon by a force, but singularly fails to define what is meant by a straight line. From the perspective of a field theory then, we can now invert this proposition and actually define an ‘inertial trajectory’ as that of an ‘unimpeded solid object in an inertial field’. So even geometry is now defined by an observation as opposed to an abstraction.
This formulation has the added attraction that it defines things in terms of observable and hence measurable reality, with no need for the assumption of superfluous variables or entities. Passive gravitational mass is not measurable and the assumption of an ‘absolute’ frame of reference is not only unprovable, but now necessitates an additional explanation as to how such a reference frame should influence physical reality.
A further advantage of the adoption of the description of reality in terms of a single field structure is that it narrows down the possibilities, thereby restricting speculation and discouraging the unrestrained invention of novel and often inconsistent mechanisms.
A complex gravitational field
If we accept the general idea of a field model then there is no such thing as an absolute frame of reference and there is no such thing as action at a distance. All influences are via local field conditions only and so a pendulum is moving with respect to a frame of reference created by the gravitational field itself.
The gravitational field can be seen as an extension of the electromagnetic field of all the matter in the planet; as such it will rotate with the Earth and will obey the laws of electromagnetism, which are complex, asymmetric and non-linear. The underlying equations are nothing like the simple radial field of Newtonian gravity, but will produce something like a radial field on large scales, thereby giving the illusion of something much simpler.
The temptation to imagine these laws operating within some Euclidean space should be resisted. The field at the surface of the Earth operates within the much larger vortex structure of the Earth’s sphere of influence and it is this larger vortex that actually determines the global geometry and no doubt contributes to the local field conditions at the surface.
A self-consistent paradigm
From one point of view, if a pendulum has an apparent deviation from the ‘straight’, then it is subject to some acceleration. However, if we define ‘straight’ as the path actually taken, then no ‘real’ acceleration takes place. ‘Physical straight’ and ‘geometric straight’ are now quite different concepts. Acceleration is ‘the action of an inertial field‘ as opposed to ‘a change in motion‘.
This makes perfect sense and leads to an improved and self-consistent science.
Newtonian and other theories claim matter, mass, distance, position and time as ‘fundamentals’ of the framework, but mass is unmeasurable, the idea of a straight line is undefined, time is ambiguous and even the idea of ‘position’ is unclear (position with respect to what, exactly?). In all cases, quantities are assumed to be relative to some absolute framework that can never be directly measured and is merely imagined.
To use a field construct as a reference frame, however, leads to a self-consistent theory consisting of a theoretical equation for the behaviour of the field and a set of measurements taken from actual reality.
Free movement (free-fall) is that which takes place according to the laws of an inertial frame and is driven by such a frame. A straight line is that followed by a free falling object. The parabolic path taken by a thrown object is inertially straight but geometrically curved because the observer is continually accelerating against the inertial field. The laws of geometry and movement are those of a local electromagnetic field shaped by an enclosing vortex structure.
Applied forces can ‘accelerate’ objects against the inertial frame. Geometric movement is that which is determined by relative distances, where such distances are themselves determined by the intensity of the field. Geometry itself is determined by the field structure, and ‘mass’ is a simplified way of quantifying a vortex; a single metric for a complex structure.
Movement and acceleration are now described in terms of actual physical processes as opposed to deriving from an abstract geometry that resides in some other-worldly realm of ideal forms.
Newton’s first law
A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.
The weakness of the law is now easily seen. The concepts of ‘straight line’ and ‘constant speed’ are ill-defined and so the law makes no sense.
To define these concepts we need some frame of reference by which to compare ‘speed’ or ‘straight’ and no such frames have been adequately described. Newton advocated for some ‘absolute’ frame of reference whilst Mach preferred to compare the local motion of objects to the distant or ‘fixed’ stars, but neither of these is really satisfactory from a practical point of view since neither reference frame is available for direct measurement. Both are simply ‘terminology’ without any real meaning.
As for empirical verification, we can try to find an experiment demonstrating the truth of the First Law; we can look for an object travelling through space in a straight line forever, but no such experiment exists. All objects in space are observed to travel in curved orbits of some sort and all are therefore inferred (from the first law) to be subject to the ‘force’ of gravity.
The reasoning is circular and the idea of an object travelling in a straight line, free from force, is redundant, since no such thing can ever occur in a universe permeated by gravitational fields.
The frame of reference must be the local gravitational field itself; this is by now ‘obvious’.
The Tamarack mines experiment
A wire was measured at the surface of the Earth and again at the bottom of some mineshafts where it was found to be considerably shorter. The reason given by Meyl is that the horizontal component of the magnetic field grows stronger for a small distance towards the centre of the Earth and it is this phenomenon that literally shrinks the wire by manipulation of the physical geometry.
Gravity is therefore more complex than a simple radial field emanating from the centre of a mass.
The sun is said to have very little equatorial bulge despite its large size and gaseous composition and rotates at different speeds according to latitude. This seems at odds with classical physics but makes perfect sense when viewed through the lens of vortex physics.
The sun is the centre of a rotating gravitational field and the surface of the sun is continuous with such a field. The field accelerates inwards and forms one ‘radius’ at the surface and possibly another at the chromosphere. The shape of the sun is determined by the overall configuration of such a vortex which obeys the laws of electrodynamics. Meyl gives a description of an electron as being stabilised by the weight of the whole universe compressing inwards and points out that the sphere is the most stable shape that could possibly result from this.
The same no doubt holds for larger objects and the sun, being gaseous and hence more easily shaped by a gravitational field than a solid planet, ends up being more spherical instead of less.
The gravitational field of the sun rotates with the surface and hence forms a stationary inertial frame of reference with respect to the surface, as with the Earth. There is a big difference here, however, which is that there is no solid body rotation on the sun but a differential rotation that varies with latitude. The question then arises: “What is the behaviour of a Foucault pendulum at the surface of the sun?”. Exercise for the reader!
The Moon and Jupiter
Jupiter has a fast spin and a large equatorial bulge, and so this bulge is attributed to the rapid spin. However, the moon has a large equatorial bulge but very little spin, and so the bulge is attributed to something other than spin. The sun has a large mass and size and significant spin but no equatorial bulge, and nobody understands this. An obvious inference is that the equatorial bulge is simply unrelated to the mass or spin of the body in question.
Variation in day length of the Earth and Venus
The rotational speeds of both the Earth and Venus vary from day to day, with the day length of Venus varying by up to 20 minutes. How does this happen?
One explanation is that there is an exchange of angular momentum between the interior of the planets and their surface. In other words, molten iron sloshes around and alters the rate of spin, as an ice skater might do by changing her moment of inertia. This is hardly credible: it would mean the transference of angular momentum by mechanical means, which would surely lead to all sorts of stresses in the crust of the planets, with tidal waves and earthquakes being an inevitable consequence.
It must be the case that the planets are affected in every single atom at the same time and this implies an inertial field. Each planet is at the centre of an extended gravitational vortex with the vortex having slight fluctuations of rotational speed. Again, this sort of thing is visible in the eddies in river currents. This requires some explanation in Newtonian physics but is to be regarded as default behaviour in vortex systems.
‘Oumuamua
‘Oumuamua and other objects are observed to accelerate away from the sun, apparently against the (Newtonian) gravitational field and various hypotheses are put forward to explain this. A better way to proceed might be to consider a more complex version of the gravitational field as described above and a more complex form of interaction than merely ‘attraction’. It has already been hypothesised that gases may interact differently to solids in a gravitational field and we may be seeing, with these objects, a different form of behaviour again.
Many of these visitors to our solar system have the appearance of energetic field vortices akin to a ball lightning phenomenon. A spinning vortex of pure electric field accumulates energy and matter continually according to the vortex principle and propels itself through space in a manner similar to a smoke ring. Once close to the sun, the dynamic electromagnetic field structure interacts strongly with the gravitational field of the sun and the resulting forces now dominate the movement of the ‘object’. The local gravitational field conditions and the dynamic field structure of the object itself will both contribute towards the movement and again, an analogy with ball lightning is appropriate.
These objects use their internal electrodynamics as an ‘engine’ to drag themselves through a gravitational field. Energy is dissipated in the form of light and matter but they are, nevertheless, at the centre of a larger vortex structure and will continue to accumulate energy as they move through the cosmos. If they did not continually ‘refuel’, then how are there any of them left in the universe?
How do these objects arise in the first place? They arise as spontaneous concentrations of vortex ‘energy’ much the same way that a local vortex may form in a flowing river from the spontaneous confluence of global currents.
Very likely many unidentified aerial phenomena are of this nature and will exhibit complex behaviour in the vortex wake of an aeroplane.
The Michelson-Morley experiment
In the Michelson-Morley experiment, two perpendicular beams of light were found to travel at the same speed despite the rotation of the Earth and its orbit around the sun. This result is consistent with the idea that the gravitational field at the surface of the Earth is not only inertially stationary with respect to the Earth but also forms a locally isotropic reference field for electromagnetic propagation.
This isn’t too far fetched. A gravitational field is hypothesised to be essentially electromagnetic in nature and photons are some sort of propagating electromagnetic field. The gravitational field therefore acts as a sort of carrier wave for the photons which adjust their speed according to the local environment.
If this is true then gravitational lensing effects are to be expected and these are indeed observed. The gravity in these effects is not acting as an inertial field upon ‘mass’ but as an electromagnetic ‘medium’ which determines the speed of propagation of the photons.
The Lense-Thirring effect
The Lense-Thirring effect is usually described in terms of general relativistic ‘frame dragging’ where a rotating body such as the Earth will ‘drag’ some space-time around with it (how?), thereby affecting the movement of objects and the propagation of light.
This can obviously be reformulated in terms of a pure vortex structure where both the Earth and its inertial (gravitational) field rotate as a single body and give us the effects described. In terms of Newtonian or Einsteinian physics, the Earth has angular momentum because of its rotation and this is no doubt the instigator of the dragging. However, the frame of reference with respect to which the rotation is defined is never specified, and so we ought not to assume that it exists.
We are not therefore able to say with any certainty that it is the frame that is being ‘dragged’, but only that the inertial field and surface movement are continuous with each other. The two move as a whole and it is quite wrong to attribute cause to one or the other when there is no need to do so and no evidence for such a phenomenon.
Summary
An alternative way of thinking about gravity has been described, first in layman’s language and derived from simple everyday observations and experience.
Next, a hypothesis for a gravitational field based upon an electromagnetic field has been shown to be consistent with the theory and to provide additional insights.
Thirdly, multiple known ‘anomalies’ which are incompatible with classical theory are given plausible explanations with respect to this new theory.
The idea of Coriolis forces at the planet’s surface is contested and the evidence from pendulum experiments found to be insufficient to prove anything either way.
The local gravitational field has horizontal components as well as radial and forms a defining frame of reference for the local movement of matter and indeed the propagation of light.
The formulation of gravity as a ‘force’ that acts upon the gravitational mass of an object is not supported by experimental observation and leads to theoretical absurdities. The ideas of force, mass, acceleration and even ‘movement’ are ill-defined, vague and not experimentally verifiable.
This post points out the anomalies, the redundancy of the concept of gravitational mass and the inadequacy of Newtonian theory even as a practical measurement system. An alternative way of looking at gravity is proposed which is intuitively superior, theoretically consistent, computationally identical to Newton’s theory, eliminates superfluous variables and provides for a definition of ‘movement’ (and hence ‘acceleration’) as being relative to the local gravitational field.
The narrative
The accepted mechanism of Newtonian gravity is that all objects possess an intrinsic property called ‘gravitational mass’ and that the Earth’s gravity acts upon that mass to produce a ‘force’ which pulls the object downwards. The more mass, the greater the force, which means that one object having twice the mass of another will experience twice the downward force. This downward force results in an acceleration of the object towards the Earth.
All objects fall with the same acceleration
There seems to be experimental evidence that all objects released above the Earth’s surface will fall to the ground with the same acceleration regardless of their presumed mass and that any difference in their speeds is down to air resistance only. Wikipedia
Since all objects in these experiments behave identically regardless of their (gravitational) mass, we cannot deduce anything at all concerning the mass of an object by observing the acceleration of that object in a gravitational field.
We cannot empirically verify the relationship between gravitational mass and downward acceleration because there is no measurable relationship.
This is unarguable.
Theoretical concerns and ‘inertial mass’
Newtonian theory now suggests that there exists another type of mass, an ‘inertial’ mass which ‘resists’ the hypothetical downward force from gravity in exact proportion to such a force. This is the explanation as to why all objects fall with the same acceleration despite having different masses; the inertial mass and gravitational mass are the same and so they both cancel each other out: NASA
From NASA: “(The theoretical) mass of the object does not affect the motion“
Mass is irrelevant according to both theory and experiment
So according to theory, the acceleration is constant and independent of mass. Moreover, according to experimental findings, the acceleration is constant and hence independent of mass.
We therefore have a theory of gravitational mass that has not been verified by experiment and where such experimental verification is actually ruled out by the theory itself!
Therefore, there is not and cannot be, any meaningful discussion of the effects of something called ‘gravitational mass’, because there are no such directly observable effects and nor can such effects be inferred from theory.
Gravitational mass cannot be said to ‘exist’ in any meaningful sense of the word and it follows that the gravitational ‘force’ that is said to be associated with it cannot be said to exist in any meaningful sense of the word.
The downwards acceleration cannot be said to be caused by a ‘force’ and cannot be said to be connected to such a thing as gravitational mass.
The uselessness of Newton’s second law in this respect
The NASA paper gives Newton’s second law of motion as somehow describing the motion of a free falling object:
force = mass x acceleration
This looks more like a definition of something called a ‘force’ than an equation telling us how an object moves, but we can rearrange it to look like this:
acceleration = force / mass
But the NASA paper concludes: “The mass, size, and shape of the object are not a factor in describing the motion of the object“.
We have a nice looking equation, but what use is it? In order to calculate the acceleration we need first to know both the force and the mass. However:
The mass cannot be determined empirically (see above)
There is no way to directly measure the ‘force’ on a free falling object
The acceleration has been empirically determined to be the same for each object
The Newtonian system is formulated around the idea of mass and force as fundamentals and wants to use these as a basis from which to try to calculate secondary quantities such as acceleration. The force and mass are assumed to be the ’cause’ of the acceleration.
However, the only quantity here that is directly measurable is that of acceleration and so why not take this as a fundamental of the system and derive the other quantities from it? The problem is that the acceleration is constant, which means that if this is the only thing that we can measure then there is no chance of deducing anything at all concerning the other quantities and no way to verify Newton’s laws as applying to falling objects.
The ambiguities of Newton’s first law
Newton’s first law from Wikipedia: “A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.”
This is where the problem lies.
It is simply decreed without justification or precise definitions that if a body is accelerating, then there must be a force acting upon that body. A free falling object is therefore assumed to have a force acting upon it and so even though no force is felt and no force is measurable, a force must be conjured from thin air; the result is the ‘gravitational force’.
Moreover, what does it mean to say that a body ‘remains at rest’? At rest with respect to what, exactly? Any object at the Earth’s surface is said to be rotating with the Earth at thousands of miles per hour and is moving through space at even greater speeds. No object that is observed to be at rest with respect to the Earth’s surface can honestly be said to be ‘at rest’, so what does the term mean? And what is meant by ‘motion in a straight line’ under these circumstances?
What is ‘position’?
There seems to be an implicit assumption that the physical world is superimposed upon some Cartesian grid which serves as a reference frame for position and hence velocity and acceleration, but no such construct has been shown to exist or to be empirically measurable and therefore deserves no place in a theoretical model of the physical world.
Other theoreticians imagine that ‘position’ can somehow be measured with respect to the distant stars and galaxies, but at the same time say that these do not have fixed position and are in fact moving away from us at ever increasing speeds.
Consider what happens when an object is ‘dropped’ in a free-falling space station: it doesn’t move with respect to the observer and so cannot be seen to have any forces acting upon it. Advocates of Newton will say that it does have forces upon it and that these are causing it to accelerate towards the Earth. However, the astronauts will not feel any forces upon themselves, cannot measure such forces, cannot directly measure their own acceleration, and will not be able to relate any movement of the object (there is no observed movement) to the mass of the object (mass is unmeasurable).
The astronauts will therefore not observe, and cannot measure any force upon the object. We have a ‘measurement system’ where literally none of the required variables can actually be measured.
A system of measurement?
Newtonian gravity as a description of physical reality seems totally inadequate, but what about regarding it merely as a System of Measurement, i.e. a system of well defined measurement techniques and equations to be used to solve practical engineering problems?
Wikipedia defines a System of Measurement thus: “A system of units of measurement, also known as a system of units or system of measurement, is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce.”
This sounds like a good idea but the problem with the theory of gravity in this regard is that the fundamental ‘measurable’ of the system is the acceleration of the object and not the mass or force. In fact, both the mass and force are shown above to be unmeasurable and irrelevant to the equation of motion.
The acceleration is not just the fundamental measurable of the system, but the only measurable of the system. An equation of motion in a uniform gravitational field reduces to:
acceleration = g (a constant)
No masses or forces are needed here.
If the gravitational field is variable, then the equation remains the same but with a variable value for ‘g’. Moreover, the value for ‘g’ will be determined by first measuring the acceleration of a free-falling object and inferring ‘g’ from the acceleration, not the other way around.
As far as our system of measurement goes, we only need acceleration as a measurable, with both mass and force being secondary (derived/imaginary) quantities.
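Such a system of measurement is straightforward in practice. A minimal sketch, using invented drop heights and times purely for illustration, infers ‘g’ directly from free-fall kinematics with no mass or force anywhere:

```python
# Infer 'g' directly from drop measurements: d = (1/2) g t^2, hence
# g = 2d / t^2. No mass and no force appear anywhere.
# The (distance, time) pairs below are invented for illustration.
drops = [(1.0, 0.452), (2.0, 0.639), (5.0, 1.010)]  # (metres, seconds)

estimates = [2 * d / t**2 for d, t in drops]
g = sum(estimates) / len(estimates)

print(f"estimated g: {g:.3f} m/s^2")
# Acceleration is the quantity measured first; any 'force' or 'mass'
# would have to be derived from it afterwards, not the reverse.
```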
An argument for the irrelevance of mass
I forget where this idea comes from:
Consider two apples of equal weight falling towards the ground. They fall at the same acceleration. Move them closer together so that they touch and nothing changes. Now glue them together so that they become one large object of twice the volume/weight/mass. Nothing changes and they continue to fall at the same rate; the amount of ‘matter’ present is irrelevant and the acceleration is always the same.
A field of acceleration?
The results so far suggest that the Earth is surrounded by something we might call a field of acceleration, which causes untethered objects to move towards the Earth with a fixed acceleration.
We can think of an analogy with a river which moves objects downstream regardless of their size or weight. No floating object feels that it is being dragged and none feel a ‘force’ pulling them along. However attempts to pull an object against the stream will certainly require the application of force.
The force needed to pull an object up or down the stream is the force needed to overcome the drag produced by the water and will be the same as the force needed to pull it left or right towards a bank. To rephrase, the force is needed to change the velocity of the object relative to the local flow of the water.
We can therefore consider that the force needed to accelerate an object in a gravitational field is proportional to the attempt to move it relative to the local gravitational ‘flow’.
A gravitational field can be thought of as flowing inwards towards the Earth from space and increasing in its accelerative potential as it nears the Earth’s surface according to an inverse square law. It will ‘drag’ any object towards the Earth in accordance with the local field value at that point.
Problems solved so far
All of the problems raised above are now solved.
There is no requirement to create a fictitious quantity called ‘gravitational mass’ only to have it cancel out in the math.
The constant acceleration near the surface of the Earth is regarded as a fundamental of the physical theory and of the system of measurement. Moreover, it is in fact measurable!
Experiments performed in a space station or falling lift are now explained naturally without having to find a balance of complex forces in order to explain a floating object. All objects including the observers are in a force-free space and this is evident by the fact that objects simply float around in mid air.
Acceleration and movement are described relative to local field conditions only. There is no need for a Cartesian grid at the base of physical reality and no need to take into account the movement of distant galaxies. Objects move according to the local gravitational field and any deviation from this movement requires the application of a ‘force’ and so a modified version of Newton’s Law is easily formulated:
“A body remains at a constant speed relative to the local field, unless it is acted upon by a force.”
The phenomenon of ‘weight’ is explained by a scale having to drag or push an object upwards against the local (downward) field flow. The phenomenon of inertia is explained similarly by ‘field drag’; the object is being accelerated against the local field and a force is required. We would expect that in a space station or falling elevator, it would be equally difficult to drag objects in any direction, but it would be nice to see some verification of this.
The equality of inertial and gravitational mass implies that the field is somehow isotropic; it is as much effort to drag the object sideways as it is to drag it upwards (prevent it falling downwards). Compare with dragging an object through a river.
If a deformable float is dragged through a river, it deforms, whereas if it is simply allowed to float downstream, it maintains its form. Similarly, if a balloon full of water is allowed to fall freely in a gravitational field, it maintains its shape, but attempts to accelerate it against the field flow, by hanging it from a string or pulling it along a friction-free surface, will cause visible deformation.
We feel heavy because every part of us struggles to move upwards against the constant downward acceleration of gravity. Astronauts in space, however, are moving with the local field flow and hence feel no weight; they are weightless.
An overall vortex structure
The field can be thought of as having an overall spherical vortex structure which intensifies towards the Earth according to the familiar inverse square law. Imagine water flowing down a sink hole to get a picture. The intensity of the field is proportional to the acceleration of matter which increases towards the Earth in the same way that a twig might increase in speed as it flows towards the whirlpool centre.
The intensity of the field is at a maximum at the Earth’s surface and then reduces in a linear fashion towards the centre of the Earth to become zero at the centre. This is the same pattern as the vortex flow in a tornado. The field is rotating at the Earth’s surface at a rate of 360° per day and this ensures that objects released above the surface fall directly downwards and do not drag behind the planet’s rotation. Again, a constant acceleration is maintained relative to the field.
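As a rough illustration of this assumed profile (a linear rise inside the Earth, inverse-square decay outside; the numerical values are standard figures, not taken from any source above), the field intensity might be sketched as:

```python
G_SURFACE = 9.81     # field intensity at the surface, m/s^2
R_EARTH = 6.371e6    # Earth's radius, metres

def field_intensity(r):
    """Radial intensity of the hypothesised acceleration field: linear from
    zero at the centre to a maximum at the surface (tornado-style core),
    then inverse-square decay outside."""
    if r <= R_EARTH:
        return G_SURFACE * r / R_EARTH       # interior: linear rise
    return G_SURFACE * (R_EARTH / r) ** 2    # exterior: inverse square

print(field_intensity(R_EARTH))       # 9.81 at the surface
print(field_intensity(R_EARTH / 2))   # ~4.9 halfway to the centre
print(field_intensity(2 * R_EARTH))   # ~2.45 one Earth radius up
```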
‘Field movement’ and ‘acceleration’ are towards the Earth but intensity diminishes towards the centre of the planet so there is no infinite accumulation of ‘field substance’ at the centre. This may seem odd, but compare with the almost universally accepted explanation of a gravitational field which is continuously ’emitted’, with no explanation of how such emission takes place or how an infinite ‘source’ of such a stuff could exist. Moreover, the field is assumed to somehow move outwards whilst pulling objects back inwards by influencing their unmeasurable (non-existent) ‘gravitational mass’.
The understanding of ‘field movement’ is by analogy with a water wave in which the wave itself appears to move in a particular direction with a particular speed, but no linear movement of the water itself is present. The wave ‘moves’ but nothing really goes anywhere and so there is no need for a ‘source’ of such a field and no infinite sink needed to dispose of the excess.
Variable day length
The length of an Earth day varies on timespans of only a few days (Wikipedia). The day length on Venus can vary by up to 20 minutes. Explanations are in the form of either external forces generated by the other planets or internal forces arising from the motion of liquid metal in the planet’s core. In neither case is it explained how such forces can act upon a whole planet at once without causing catastrophic deformation of the crust and consequent earthquakes.
The problem, then, is in attributing the variable rotation speed to things called ‘forces’. Given the hypothesis outlined above, we can instead consider that the variable rotation arises from variations in the behaviour of the Earth’s gravitational field itself, and it is this field and these variations which affect the rotational speed of our planet.
Gravity pulls objects directly downwards, towards the centre of the Earth, and not at an angle determined by the rotational speed. If we forget about momentum for a moment (too Newtonian), this implies that the Earth’s gravitational field is rotating along with the surface of the Earth and is continuous with it. We could actually say that it is this gravitational field that is ‘causing’ the Earth to rotate, or maybe that the field preserves the constant rotational acceleration in the same way as it preserves the constant linear acceleration of a falling apple.
If we try to explain the variable rotation in terms of ‘forces’, we need huge forces to move the whole planet. However, an explanation in terms of an acceleration field is, by its very nature, independent of the mass of the planet and arises simply from the dynamics of vortex flow. To get a visual picture, watch some eddies in a stream and observe how their local activity fluctuates slightly in response to both the proximity of other eddies and global changes in the flow as a whole.
In classical physics, gravity, energy and matter are all separate entities and the theory of physics is all about describing how these entities somehow affect each other in a meaningful way. In the vortex physics of Konstantin Meyl, however, even electrons and other fundamental particles are formulated as simple field vortices, with energy, matter and mass being emergent properties of the underlying field, in the same way that a water vortex is not a separate entity of itself but a manifestation of the underlying properties of water.
The Earth’s gravitational field, then, spirals inwards from the cosmos and, at the Earth’s surface, fine-grained structure appears which is interpreted as ‘matter’. This matter is not separate from the field but ‘is’ the field, and the rotation of the Earth is not ‘caused by’ the field but is synonymous with it. The persistence of rotation arises from the properties of the field and is formulated as ‘angular momentum’ in classical mechanics.
What is ‘momentum’?
The field accelerates objects downwards towards the Earth’s surface because the ‘field movement’ or accelerational component of the field is at right angles to the Earth’s surface and moves along with it. The horizontal component of such a field is zero with respect to the Earth’s surface.
A thrown object maintains a constant speed relative to the local field, which moves with the Earth’s surface, and will therefore maintain a constant horizontal speed; this is interpreted as momentum in classical mechanics. Momentum, mass and inertia are therefore not intrinsic properties of a moving mass but illusions created by the interaction between the ambient gravitational field and the field structure of the object itself.
No Cartesian grid?
There is no underlying Cartesian grid to physical reality; all movement and acceleration are with reference to the local field conditions. There is no need to hypothesise some independent entity called ‘space’ and no need to hypothesise any absolute metrics of distance or even time as all of these are not fundamentals of reality but measurement artefacts that are dependent upon both local field conditions and the precise mechanism of measurement.
‘Distance’ is the length of a ruler, a physical object. Such a length will vary according to ambient field strength (Tamarack mines experiment) and so the distance metric will necessarily vary. The overwhelming desire for an invariant form of ‘length’ in the form of an invisible entity called ‘space’ or even ‘aether’ has caused physicists to assume the existence of such a thing with no proof and to the detriment of scientific progress.
Mach’s principle
How does an object ‘know’ when it is rotating? What is its frame of reference and how do centrifugal forces arise?
The frame of reference is the ambient gravitational field and acceleration is relative to this field as in all cases. The illusion of centrifugal force arises from movement against the local gravitational flow, just as with a falling object.
Gravity as an electromagnetic field
The idea that gravity is in fact an electromagnetic field has been floated by several people including proponents of the Electric Universe model and German physicist Konstantin Meyl.
Meyl gives a modified version of Maxwell’s equations to describe the field as the cumulative average of all of the magnetic dipoles of all of the fundamental particles which constitute the body of the Earth and any object within its ambit. Calculations given in his book “Scalar Waves: A first Tesla physics textbook for engineers” lend quantitative support to this hypothesis.
What is interesting is that descriptions from Meyl, based upon a theory at the atomic level, seem entirely consistent with the model presented above. The laws of physics are the same at all scales of reality and so careful interpretations of macro phenomena can lead to valid hypotheses concerning reality at the atomic level.
A brief note on causality
Newton’s first law: “A body remains at rest, or in motion at a constant speed in a straight line, unless it is acted upon by a force.”
Note the implication here of causality; a force is causing a body to change its customary motion and if a body is changing its motion then there must be a force acting upon it.
How do we test this? How do we quantify the forces and accelerations?
Newton’s second law in mathematical notation:
force = mass × acceleration, or equivalently, acceleration = force / mass
Note the lack of any sort of causality. We just have mathematical equality in equations where manipulation is according to the laws of mathematics and not the laws of causation. The equations can be reversed left to right and divided either side and the ‘meaning’ remains the same.
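The direction-free character of the equation is easy to exhibit in a few lines: given any two of the three quantities, the third follows, and nothing in the algebra marks one variable as ‘cause’ and another as ‘effect’. (A sketch for illustration only.)

```python
def solve_newton(force=None, mass=None, acceleration=None):
    """Solve F = m * a for whichever quantity is missing. The algebra is
    symmetric: no variable is privileged as the 'cause' of the others."""
    if force is None:
        return mass * acceleration
    if mass is None:
        return force / acceleration
    return force / mass

print(solve_newton(mass=2.0, acceleration=9.81))     # 'force' = 19.62
print(solve_newton(force=19.62, mass=2.0))           # 'acceleration' = 9.81
print(solve_newton(force=19.62, acceleration=9.81))  # 'mass' = 2.0
```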
There is no symbol for ’causes’ in classical physics, but the equations are always interpreted as somehow encapsulating causality. We therefore have a theoretical framework that is incapable of expressing one of the main ideas of its own inception.
This inevitably leads to confusion. How can we ever prove that it is a force which is causing the motion, as opposed to the acceleration of a mass which is causing an apparent force? If a force and an acceleration are always co-present, then in what sense can one be said to be ‘causing’ the other? If we can get by with a mathematical framework that does not include the idea of causation, then why did we need such an idea in the first place?
Newton has chosen to essentially invent the concept of a force as being somehow ‘causative’ (of a change in movement) in the Universe but he could just as well have decided that ‘acceleration’ was a fundamental property of objects near a mass and that such an acceleration, if opposed, would lead to a measurable force. The mathematical theoretical framework, containing no concept of causality, cannot possibly refute this idea and so we are completely justified in conceiving of a universe where ‘acceleration’ is primal and (inertial) ‘forces’ are a secondary epiphenomenon.
Summary
The idea of Newtonian gravity as arising from a ‘force’ exerted upon a gravitational mass has been shown to be nothing more than an intellectual conjuring trick, with the mass itself acting as the MacGuffin, a beguiling distraction which has nothing to do with the mechanics of the trick itself and is, in this case, not measurable, not observable and not computationally relevant.
A new way of thinking about a gravitational field has been described which:
Eliminates the anomalies of the Newtonian system
Has no surplus variables
Has no theoretically unmeasurable quantities
Is computationally identical to Newton’s system
.. and is therefore consistent with existing experimental results
Is less confusing to think about
Is consistent with the idea of gravity as an electromagnetic field
Is consistent with the bottom-up theory of Meyl
Is consistent with the thought experiments of Einstein
Relies upon local field conditions only
Requires no imaginary Cartesian grid
Defines ‘movement’ relative to the local field
Has been derived from observations at the macro scale
This post suggests an overall toroidal topology for the universe and tries to introduce a most important idea: that the physical measure of distance is dependent upon (gravitational) field strength (Boscovich, Meyl) and is therefore a function of where in the universe the measurement is made.
The overall topology considered is that of a torus (below) and the behaviour is that of a continually flowing electromagnetic field as described by Konstantin Meyl. The flow is according to the laws of electromagnetism and itself takes the form of a dynamic vortex structure.
Electromagnetic field movement is continuous and therefore takes on the form of a torus as being the only structure capable of sustaining such a flow. Any other attempt at a continuous flowing vector field ends up having a discontinuity somewhere; see the Hairy Ball Theorem of topology: Wikipedia
The universe consists solely of a flowing electromagnetic field which determines the topology and since the field naturally forms a torus, the depiction of the universe as a torus is justified from this consideration alone.
There is only the Field
There is a strong temptation to imagine an electromagnetic field taking on a toroidal shape embedded within a Cartesian grid system which determines distance and angles, but the task here is to consider that it is the field itself that determines both topology and metric.
The field is not embedded in anything at all; there is no distance metric as separate from that which is physically measured, there is no such thing as ‘space’ that is separate from the field and no such thing as ’empty’ space.
All that is measurable is an electromagnetic field and anything that is not part of such a field is not part of the measurable universe and therefore cannot be said to ‘exist’ in any meaningful way.
The idea of a separate ‘ideal’ universe with nice tidy geometry is just a fantasy.
The electromagnetic field is the entirety of the universe and takes on a toroidal form and therefore the universe is toroidal in overall topology, i.e. the shape of the universe is determined by its contents and is not independent of them.
Physical ‘distance’
If we are not embedded in a Cartesian grid system then how is distance defined?
There is surely only one option; we define it from the physical matter of the universe as this is all that is available to us.
Construct some sort of yardstick and declare it to be one Cosmic Unit (CU) long. Imagine it to be the width of one of the grid squares in the above image and try to think what happens as it is moved around the universe.
Field strength is inversely proportional to the size of the square, with smaller squares having greater field strength. Length is determined by field strength with a stronger field compressing distance accordingly and as a natural consequence, ‘shrinking’ the yardstick to maintain proportion with the grid squares.
As our measurement instrument is moved towards the centre of the torus, the atoms are compressed and the stick physically shrinks, whereas if it moves outwards towards the periphery, then it and all the surrounding physical matter will expand.
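A toy model may help fix the idea (the scaling rule, ruler length inversely proportional to local field strength, is an assumption made here purely for illustration and is not taken from Meyl’s equations): the measured distance along a path is then the number of locally-shrunken rulers that fit along it.

```python
# Toy model: ruler length varies inversely with local field strength f, so
# the distance measured in Cosmic Units along a path is the integral of f.

def measured_distance(field_strength, start, end, steps=10_000):
    """Measured length in CU between two map points, where field_strength
    is a function of the map coordinate x."""
    dx = (end - start) / steps
    total = 0.0
    for i in range(steps):
        x = start + (i + 0.5) * dx
        total += field_strength(x) * dx  # stronger field => shorter rulers => more fit
    return total

# The same map interval measures twice as long where the field is twice as strong:
print(measured_distance(lambda x: 1.0, 0.0, 3.0))  # 3.0 CU
print(measured_distance(lambda x: 2.0, 0.0, 3.0))  # 6.0 CU
```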
To reiterate: all we have as a measurement tool is our physical-matter yardstick. This is our fundamental reference and any idea that it is somehow measuring something else more absolute called ‘length’ is just a hallucination.
All we have available in physics is our observations of physical events, and any relation to an underlying geometric model is mere inference. The theory of physics should start with observations and not with an assumed Cartesian framework with an already existing metric and 3-d geometry.
Is the universe expanding?
Now we are an observer within a toroidal universe and are looking around trying to make sense of things. Some parts of the universe seem to be expanding relative to us and even moving away from us whilst others appear to be contracting or spiralling inwards.
However, if we move to the periphery, where things seem to be expanding, then we will ourselves, expand with the toroidal geometry and find that our home planet is now shrinking relative to us even though we thought it to be constant in size when we were living there.
Moreover, in our expanded state, we find ourselves spiralling inwards much to our surprise and realise that the apparent expansion of the universe as seen from Earth was merely an illusion owing to the fact that our Earthly system is now seemingly shrinking and moving away from the outer reaches of space. This made it seem to us at the time that the universe was actually expanding away from us.
Parts of the universe are therefore expanding relative to us and others are shrinking, but the inhabitants of those parts are unaware of this and presumably imagine themselves to be somewhere near the centre and in an absolute frame of reference.
So is the universe expanding?
Relative to what? There is no absolute measure of distance apart from a yardstick which adapts its size to local conditions and within the universe itself, no perceivable ‘edge’ or defined outer boundary and so the question really doesn’t make much sense.
But .. geometry?
In the diagram below, the triangle on the left has equal sides and equal angles. The sides are each 3 Cosmic Units long as measured with one-CU rulers (shown).
The triangle on the right has its base in an area of increased (gravitational) field strength (maybe from a local sun) and so the rulers there have shrunk. This means that it still measures 3×3×3 CU, but the angles have changed.
Local distances are determined by field strength which leads to a modified geometry. So geometry itself is determined by field conditions and is no longer ‘absolute’.
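Continuing the toy model above, with invented numbers: suppose the rulers along the base shrink to 80% of their far-field length. Each side still measures 3 CU locally, but the physical triangle is no longer equilateral and its angles shift accordingly.

```python
import math

# Illustrative numbers only: the base sits in a stronger field where rulers
# are 0.8 units long, so its physical length is 3 * 0.8 = 2.4 against 3.0
# for the other two sides, even though all three measure 3 CU locally.
base, leg = 3 * 0.8, 3.0

# Law of cosines for the apex angle between the two equal legs.
apex = math.degrees(math.acos((2 * leg**2 - base**2) / (2 * leg**2)))
base_angle = (180.0 - apex) / 2

print(f"apex angle: {apex:.1f} degrees")         # ~47.2, no longer 60
print(f"base angles: {base_angle:.1f} degrees")  # ~66.4 each
```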
This is important when calculating the distance to other stars and galaxies. Cosmologists assume an invariant Euclidean geometry for the universe but this doesn’t hold here, and the stars may be much ‘nearer’ than we think.
As a spaceship exits our solar system, the field strength diminishes and the ship expands accordingly to a great degree. Vast ‘distances’ are covered in a very small time.
Platonic forms
Anyone who wants to argue that metre rulers are always a metre long needs to think about how to prove this. What is your control? What do you measure a metre against, if not some other local physical object or waveform?
The whole idea of an abstract and invariant metric is unprovable. Distances are determined by the size of physical objects and these vary according to field strength.
Field strength varies slightly everywhere and as a consequence there is no such thing as a perfect circle or square anywhere in the universe, no such thing as a Platonic form in actual reality or even the expression of such.
Constants such as Pi exist only in an imaginary realm of perfect geometry.
Physical (real) geometry is determined by the laws of field physics and if something looks a bit like a cube it is because of the local laws of physics and not because of the laws of mathematics. The (approximate) cube is a perfect expression of the field equations and not an imperfect expression of a Platonic form.
Black holes
Take a look again at the overall topology and consider that within this structure lie smaller more local structures which are interpreted as stars, galaxies and black holes.
Now depending where you are on the torus you may see half the galaxy heading towards the central singularity and infer a great force emanating from the core and sucking everything in, or you may see a great outrush of matter pouring from an assumed ‘white hole’.
None of these assumptions are any good here and all that is happening is that matter is moving in an inevitable path as determined by the dynamic topology. Matter does not ‘collapse’ in a black hole but merely shrinks accordingly and will expand again when it comes out the other side.
Gravity
Such behaviour near a planet or star gives rise to the phenomenon known as gravity, which again is assumed to somehow ’emanate’ from the star and suck things towards it. Nobody has seen gravity emanate however and so it is permissible to think of it as an inward spiralling of the field geometry.
This isn’t too outrageous a statement and is comparable to Einstein’s bendy space idea except here we have no need for a separation between space and matter and all is a pleasing unity.
Einstein’s spacetime
“Spacetime tells matter how to move; matter tells spacetime how to curve.” – John Archibald Wheeler
Here we have a superfluity of ‘stuffs’ that is common in mainstream physics. How do spacetime and matter communicate with each other in such a fashion and where are the laws governing such an interaction? How is it proved that spacetime and matter are really separate entities? What are the innate properties of ‘spacetime’ that allow it to be manipulated in such a way and how does it ‘move’ matter?
More pertinently we can ask: “What does it mean that space is ‘curved’ and with respect to what exactly?” The whole idea of ‘curvature’ seems to assume the existence of some sort of Cartesian reference grid as separate from the curved space.
Progress has not been made and all that has happened is that the conceptually difficult part has been moved from one place to another in the hope that nobody will notice.
If matter and geometry are so closely linked, we can consider that they are really both manifestations of some other underlying phenomenon and that such a phenomenon is now seen to be an electromagnetic vortex field.
Gravitational lensing
The phenomenon of light bending its way past a massive body now needs almost no explanation.
The gravitational field of a star is no longer to be regarded as a force or even a distortion in spacetime but simply the centre of a field vortex.
The field strength closer to a star will be greater than the strength slightly further away and so lengths ‘increase’ further away from the star. A photon has a finite size in vortex physics and so contracts nearer the star and expands further away. Translation: it follows a curved path.
Space is not bent as there is no such thing as space to be bent, only a toroidal field creating a toroidal geometry.
Earth-sun orbital anomaly
The Earth is said to orbit the sun but the position of the sun is not fixed, being displaced by a distance of over a million kilometres by the gravitational fields of the Earth and other planets. Despite this, the gravitational pull on the Earth from the sun seems to be always towards the sun’s present position and never towards where it was a few minutes ago. (Van Flandern)
Some have interpreted this as the gravitational field from the sun travelling at many times the speed of light in order to reach the Earth in time, but nobody has seen a gravitational field ‘travel’ or ‘radiate’ from the sun and in any case better explanations are now available.
There is no unlimited gravity that emanates from the sun but instead the sun and planets move in a coordinated fashion according to an ever changing vortex geometry and as such it cannot be considered that the sun is ‘causal’ in moving the planets or that Saturn is ‘causal’ in moving the sun.
The sun is positioned at the centre of the most powerful vortex and it is this vortex that has the most influence on the solar system as a whole, thereby creating many correlations between the movements of the planets and the position of the sun. However, this in no way implies that the sun itself is the origin of such movements.
The whole arrangement moves as a whole and according to the laws of vortex physics. The sun is moved by the vortex as is Saturn and the Earth itself and any perceived influence of one body directly upon another is merely an illusion.
In addition to this mechanism, we now should concede that photons are travelling from the sun within a geometric vortex and will move accordingly. The idea that light always travels in a straight line through space is now meaningless as there is no such thing as Euclidean space and therefore no such thing as a straight line.
Instead, we have photons moving through a vortex system which, whatever their location of origin, will impact the Earth in a direction determined by the vagaries of the whole path taken from the sun through the intervening vortex field.
If you want to try and guess their origin from the direction they approach Earth, then .. “Good luck!”. This is like trying to locate the source of a river by standing at the estuary.
The mechanism
How does all ‘matter’ shrink in a strong gravitational field?
In Meyl’s vortex physics, all matter is made from an agglomeration of electrons, and an electron is just a stable field vortex with electrical spin and a magnetic dipole. Put such a thing inside a magneto-gravitational field and the radius of the spin reduces, so the radius of the electron reduces and all matter then shrinks.
Evidence?
Tamarack mines experiment: a long piece of wire was dropped down a mine shaft and found to have shrunk by a significant amount, the implication being that the increase in strength of the magnetic component of the Earth’s gravitational field is responsible.
Hafele–Keating experiment: clocks in aeroplanes run at different rates depending upon whether they are travelling East to West or West to East.
The origins
This scheme makes the idea of a Big Bang radiating all the energy and matter in the universe both unlikely and unnecessary.
We don’t know how things ‘started’ or even if there was a ‘start’, but if the general movement is from periphery to centre, opposite to conventional thinking, then it would make sense to think about the origins in a similar manner.
Field ‘energy’ originates as a vortex somehow and immediately starts to spiral inwards. The energy density increases and smaller vortices arise near the centre which will form smaller and smaller vortices in a fractal pattern.
These smaller vortices form galaxies, stars and single atoms, in that order, with the smaller structures arising from the larger and not the other way around.
The smallest vortices stabilise around the size of an electron and matter has materialised from a pure electromagnetic field. The creation of matter continues throughout the lifespan of the universe and there is no upper limit on the total mass.
It may seem that the universe needs to be exceedingly large at the outset in order to contain enough energy to materialise such matter, and that the sheer volume required is enough to counter the argument. This is not the case, however, as there is no objective ‘size’ to the universe at all and all subsequent ‘expansion’ can as easily be thought of as being inward as outward.
There is no real expansion, creation or loss, but instead an increasing complexity of vortex structure arising from the inward concentration of field movement.
A Theory of Objectivity
How on earth do we do any science when distances keep changing and we do not even have a consistent way of measuring the passage of time?
Meyl has the answer which he calls his Theory of Objectivity. A transformation is made from local coordinates to global, calculation is made in this new objective framework and the answer is transformed back into local coordinates.
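Meyl’s book gives the actual transformation; purely as an illustration of the transform-calculate-transform-back pattern, and under the same toy assumption as before (lengths scaling with local field strength relative to a chosen reference), the scheme looks like this:

```python
# Illustrative only: the scaling rule below is an assumption made for this
# sketch, not Meyl's actual equations.

REFERENCE_FIELD = 1.0  # arbitrary choice of 'objective' reference frame

def to_objective(local_length, local_field):
    """Map a locally measured length into the shared objective frame."""
    return local_length * local_field / REFERENCE_FIELD

def to_local(objective_length, local_field):
    """Map an objective-frame length back into local units."""
    return objective_length * REFERENCE_FIELD / local_field

# Observer A (strong field) publishes a result; observer B (weak field)
# converts it into their own local units:
objective = to_objective(10.0, local_field=2.0)
print(objective)                              # 20.0 in the objective frame
print(to_local(objective, local_field=0.5))   # 40.0 in B's local units
```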
There are no fundamental constants and hence no fine tuning of the universe is necessary. Most fundamental constants come from the need to translate between the different ‘stuffs’ and energies of contemporary physics. Once these are reduced to a single set of equations, the problem disappears.
This never happened
The elementary particles
In the chart below, Konstantin Meyl shows the measured masses (relative to the mass of the electron) of the elementary particles and compares them with the values he has calculated from his own field equations.
The correlation is striking and cannot be coincidence.
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
The periodic table
In this next chart, again from Meyl, the measured radii of the elements from the periodic table are compared with values calculated from the more fundamental field equations. No other informational input is necessary.
The values show precise correspondence at the start of each new electron shell and drift apart slightly as the complexity of calculation necessitates simplification by series truncation.
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
Avogadro’s number
Avogadro’s Law: “Equal volumes of all gases, at the same temperature and pressure, have the same number of molecules.” – Wikipedia
Avogadro constant: “The Avogadro number is an exact number equal to the number of constituent particles in one ‘mole’ of any substance” – Wikipedia
Simplification: “The same number of molecules take up the same amount of space” (Each molecule is the same size?)
Fixed by decree: “In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant N_A as the exact value 6.02214076×10²³ mol⁻¹” – Wikipedia
There is no sensible explanation for this within mainstream physics. The value of the constant cannot be calculated directly from any fundamental theory of gases, so it is simply decreed that the number itself is a fundamental constant of physics. This discourages any attempt to investigate the matter, removes the need for any proposed mechanism and obviates the need for any more measurements of the value, as it is already established as a fixed element of the system!
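For reference, the law itself is easy to state quantitatively: in the standard ideal-gas relation N = PV/(kT), the molecule count depends only on pressure, volume and temperature, never on which gas is present. What the text argues is missing is any mechanism for why this should be so. A quick check of the numbers:

```python
K_BOLTZMANN = 1.380649e-23  # J/K

def molecule_count(pressure_pa, volume_m3, temperature_k):
    """Molecule count from the ideal-gas relation N = P*V / (k*T);
    note that no property of the particular gas appears anywhere."""
    return pressure_pa * volume_m3 / (K_BOLTZMANN * temperature_k)

# One litre at one standard atmosphere and 0 degrees C, for ANY ideal gas:
print(f"{molecule_count(101_325, 1e-3, 273.15):.3e}")  # ~2.687e+22 molecules
```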
An explanation of the constant and a derivation from Meyl
The obvious inference from measurements is that the molecules are spaced out evenly throughout the volume, each surrounded by some ‘sphere of influence’ which keeps them apart and which provides resistance to compression via repulsive forces. These forces nevertheless allow the molecules to move around (diffusion and flow) with only a little resistance (friction).
The only forces worth considering here are electromagnetic in nature and so we need some sort of field structure that creates such a sphere around an atomic nucleus. The field will be some arrangement of electrically negative vortices which are attracted to the nucleus but repel other such structures.
A credible description of the gaseous state of matter
The extra energy in the gas state has caused the eight electrons of the n=2 shell of the Oxygen atom to come out of their usual concentric orbitals to form an eight-fold ring around the outside of the nucleus. The reduced field strength at this distance from the centre has caused the electrons to expand suddenly to many times their original volume.
The electrons stick together via magnetic dipole forces but repel other negatively charged elements. The electrons spin individually and also rotate together as a ring, and this represents a means of energy storage and energy transfer. A cross-sectional view from the north pole is shown, but in reality the whole shape is that of a peeled orange: an overall sphere comprised of eight segments which are the electrons.
Whatever the original size of the molecule, the volume is now dominated by the size of the expanded electron shell and this is the same for each atom at least. Something similar must be happening with compound molecules.
Gas pressure and Avogadro laws are now explained along with the critical (as opposed to continuous) change from liquid to gas.
Gravitational constant
“About a dozen measurements of Newton’s gravitational constant, G, since 1962 have yielded values that differ by far more than their reported random plus systematic errors. We find that these values for G are oscillatory in nature, with a period of P = 5.899 +/- 0.062 year, an amplitude of (1.619 +/- 0.103) x 10^{-14} m^3 kg^{-1} s^{-2}, and mean-value crossings in 1994 and 1997.” – Anderson et al.
So not only do measurements vary but they vary with a certain pattern which actually correlates with the varying rotational speed of the Earth:
“Of other recently reported results, to the best of our knowledge, the only measurement with the same period and phase is the Length of Day ” – ibid
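For concreteness, the oscillation reported by Anderson et al. can be written as G(t) = G₀ + A·sin(2π(t − t₀)/P). A minimal sketch of that model follows; the mean value G₀ is my assumption (a nominal figure), while the amplitude, period and 1994 crossing come from the quote above.

```python
import math

G0 = 6.674e-11         # assumed nominal mean value, m^3 kg^-1 s^-2
AMPLITUDE = 1.619e-14  # from Anderson et al., m^3 kg^-1 s^-2
PERIOD = 5.899         # years
T0 = 1994.0            # a reported mean-value crossing

def g_model(year):
    """Sinusoidal fit to the published G measurements."""
    return G0 + AMPLITUDE * math.sin(2 * math.pi * (year - T0) / PERIOD)

for year in (1994.0, 1995.5, 1997.0):
    print(year, f"{g_model(year):.6e}")  # crossings near 1994 and 1997
```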
Most sources will say that there is and can be no variation at all in the gravitational constant simply because it is declared as a fundamental constant of nature. Any apparent discrepancies in the value must therefore be caused by problems with the measurement method:
“However, we do not suggest that G is actually varying by this much, this quickly, but instead that something in the measurement process varies” – ibid
One possibility mentioned by Anderson et al. is that the whole process is somehow affected by the Earth’s magnetic field:
“Least unlikely, perhaps, are currents in the Earth’s fluid core that change both its moment of inertia (affecting LOD) and the circumstances in which the Earth-based experiments measure G. In this case, there might be correlations with terrestrial magnetic field measurements.” – ibid
Variations in measurements of the gravitational constant – Speake, Quinn
Gravity as an emergent effect of magnetic dipoles
Many scientists including Konstantin Meyl and adherents of the Electric Universe Model have suggested that gravity is really just an average of the electromagnetic fields arising from the constituent atoms of matter.
The field arises from the sum of the magnetic fields of a random assortment of atoms and will consequently become much stronger if the atoms are aligned and regularly spaced such as in a bar magnet.
Meyl gives arguments for the masses of the elementary particles (see above) and calibrates them with respect to the mass of an electron, obtaining very good agreement with experimental results.
So gravity is not fundamental but arises from magnetic fields, with the cumulative effect in macro-sized lumps of matter dependent upon the precise arrangement of atoms and possibly the presence of other electromagnetic fields.
The mass of an electron according to Meyl is not fundamental but depends upon the speed of light.
How is the gravitational constant measured?
Good question. The papers cited above merely say that the constant has been ‘measured’ by several different teams. This gives the impression that you can buy a device to wave in the air and get a reading in both metric and imperial units if you are lucky.
This is not the case; what is actually measured is the motion of rotating balls or falling weights, with the gravitational constant then inferred from such measurements.
The only physical measurement we ever see in real life is the displacement of a visual marker on some instrument or other, whether it be the hands on a clock or glowing digits on an electronic device. Everything else is an artefact of the model.
To say that the gravitational constant is ‘measured’ is highly misleading; it is interpreted from measurements and according to a theoretical framework. Now if your theoretical framework has this value defined as ‘constant’ and it turns out to be variable then you are already in a bit of a mess.
What is ‘mass’?
There is no consistent definition of ‘mass’. It is held to be fundamental (of course!) and is described as an ‘innate’ property of matter, but the only existing definitions are contradictory and circular.
“Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses.” – Wikipedia
Oops! Mass is not related to the quantity of matter!
If mass is not related to the quantity of matter and we have no other definition apart from a collection of purported measurement techniques, then how can it be ‘intrinsic’?
“Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body’s inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object’s mass also determines the strength of its gravitational attraction to other bodies.” – ibid
But it is already established that the strength of gravitational attraction is dependent upon the gravitational constant, not just the mass.
We find that mass is defined by various measurement techniques:
Resistance to acceleration (inertia)
Strength of gravitational attraction to other bodies
Power to attract other bodies by its own gravity
These are emphatically not physically equivalent unless shown to be so by experiment and theory. Just saying it is so does not make it so.
Note that all these definitions are by measurement of something other than mass itself. The mass, which is presumed fundamental and declared ‘intrinsic’, is actually a theoretically inferred value from other (measurable) quantities.
Moreover, the strength of gravitational attraction (mass) depends upon the gravitational constant and this has been shown to vary, or at least has not been shown to be constant.
In addition to this we find that calculations of the gravitational constant itself all depend upon knowing the precise values of the masses involved. Therefore: Gravity depends upon mass and mass is defined with respect to gravity.
This is circular self-referential nonsense!
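The circularity can be exhibited with standard orbital mechanics: from orbital data alone, only the product GM (the so-called standard gravitational parameter) is recoverable, and G and M can never be separated without independently assuming one of them. A sketch, using approximate published figures for the Moon’s orbit:

```python
import math

def mu_from_orbit(radius_m, period_s):
    """Infer mu = G*M from a circular orbit via Kepler's third law:
    mu = 4 * pi^2 * r^3 / T^2. Only the product G*M is obtainable."""
    return 4 * math.pi**2 * radius_m**3 / period_s**2

mu = mu_from_orbit(3.844e8, 27.32 * 24 * 3600)  # Moon: orbit radius, period
print(f"mu = {mu:.3e} m^3/s^2")                 # ~4.0e14, i.e. G * M_earth

# Any (G, M) pair with the same product reproduces the same orbit:
for G in (6.574e-11, 6.674e-11, 6.774e-11):
    print(f"G = {G:.3e} implies M_earth = {mu / G:.3e} kg")
```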
Inertia as mass
The addition of inertia as a definition of mass does not help. This just adds an extra quantity that needs defining, measuring and somehow integrating into an already shaky framework.
How can this be achieved if inertia is absolute but other forms of mass vary? What is the theoretical mechanism that describes how the inertial mass is the same as the gravitational? In what sense then are they ‘equivalent’?
Inertial mass is measured by the force needed to produce an acceleration on an object. It therefore needs an acceleration in order to be manifest and yet at the same time is said to be an ‘innate property of matter‘.
How is this conclusion reached if the mass is never measured with respect to a body at uniform speed? How do we know that the mass of such an object persists at the measured value and what does this even mean?
An analogy with dynamic friction
If this seems like sophistry, first consider the phenomenon of ‘friction’. It makes a good analogy because nobody knows exactly how it works and the property of dynamic friction is only measured in moving objects. The frictional properties of stationary objects are different from those of objects in motion, and both are dependent upon the interaction between the objects.
Nobody thinks that friction is an innate property of any material but varies with speed and depends upon the relationship between the two surfaces. Dynamic friction is only present when motion is involved and disappears when motion ceases. Nobody asks “Where has it gone?” because it is not assumed to be an immutable property of matter.
Lenz’s law
A magnet dropped down a copper pipe will travel much slower than if the pipe were not there according to Lenz’s law.
What has happened to all the mass? If mass is intrinsic then there is some other (magnetic) force acting upon the magnet to oppose the motion. No magnetic field was present in the copper pipe before the motion started and the field of the magnet is not sufficient by itself to produce the slowing down. The force did not exist prior to the experiment and disappeared after it ended. The new property was actually created by the experiment itself.
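The standard toy model of this experiment makes the point numerically: the eddy-current retarding force is proportional to velocity, so it is zero before the magnet is released, grows as the magnet speeds up, and vanishes again when motion stops. (The damping coefficient below is an invented illustrative value.)

```python
G_FIELD = 9.81   # local field acceleration, m/s^2
DAMPING = 8.0    # eddy-current damping per unit of magnet 'mass', 1/s (invented)

def magnet_in_pipe(duration=3.0, dt=1e-3):
    """Velocity of the falling magnet with a velocity-proportional
    retarding term; the term is zero whenever the velocity is zero."""
    v = 0.0
    for _ in range(int(duration / dt)):
        v += (G_FIELD - DAMPING * v) * dt
    return v

print(f"terminal velocity ~ {magnet_in_pipe():.3f} m/s")  # ~9.81/8 = 1.226
```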
Again, nobody would think that this retarding force is an intrinsic property of matter, so how can they be so certain as to claim that ‘mass’ is such a property?
If, as suggested above, the gravitational force arises from the electromagnetic field interaction between the field of an object and the field of the Earth then the above considerations are pertinent. The current formulation of the mass of an object as only dependent upon the object itself, however, effectively rules out any investigation of such phenomena.
An empirical definition?
“Mass can be experimentally defined as a measure of the body’s inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. ” – Wikipedia
This is as confused as a definition can be.
If all that is measured is a resistance to acceleration then that is all that may be deduced. An ‘intrinsic property’ may not be inferred and there should be no automatic conclusion of a similar effect in different gravitational fields.
“The object’s mass (i.e. resistance to acceleration) also determines the strength of its gravitational attraction to other bodies.” How does this work exactly? How can this be deduced? Do we assume that a material with a high frictional coefficient also has the power to attract other objects? No, of course not.
We have several different measurement techniques measuring several different quantities and the claim is that they are all ultimately measuring the same thing, that they are ‘physically equivalent‘. But how can this be justified?
A measurement is just a measurement and a concept just a concept. The concept of mass is just a concept, as it can never be measured directly. It can be deduced only by the application of external forces and the measurement of movement, followed by an interpretation made according to a specific theoretical model.
So two different results are obtained from two different measurement techniques, interpreted according to two conceptually different theoretical frameworks and are then declared to be “physically equivalent“! No. Theoretically equivalent, maybe, but ‘physically‘? No, the phrase has no meaning.
If inertia is simply owing to the quantity of matter present then it cannot possibly be related to mass, according to the initial quote from Wikipedia!
Time
There appears to be no consistent definition of time as an independent physical variable.
The rate of a swinging pendulum depends upon gravity and so will change with variations in the gravitational constant and will vary according to its location on the Earth’s surface.
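The dependence is explicit in the textbook small-angle formula T = 2π√(L/g): the same pendulum ticks at measurably different rates at different latitudes, using only the standard variation of g over the Earth’s surface.

```python
import math

def pendulum_period(length_m, g):
    """Small-angle period of a simple pendulum: T = 2 * pi * sqrt(L / g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# The same one-metre pendulum at two locations (standard g values):
print(f"{pendulum_period(1.0, 9.832):.4f} s near the poles")   # ~2.0038 s
print(f"{pendulum_period(1.0, 9.780):.4f} s at the equator")   # ~2.0091 s
```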
The rate of atomic clocks varies even with two clocks in the same building. They will run at different speeds during an eclipse and even differ according to their alignment with respect to the Earth’s magnetic field. See diagram below.
Meyl: Scalar waves..
Distance
In the Tamarack mine experiment a long piece of wire was lowered into a mine shaft and found to have shrunk considerably (see: Tamarack mines experiment). The explanation from Meyl is that a horizontal component of the Earth’s magnetic field increases towards the centre of the Earth and this is responsible for shortening the wire.
The Hafele–Keating experiment showed the opposite effect when distances were measured in a plane flying at altitude; distance was stretched out instead of shrunk.
A simple measure of distance is therefore subject to interpretation and such interpretation will vary according to the model involved.
Attempts to measure distance by wavelengths of light are subject to Doppler shift and again are not direct measurements at all but interpretations filtered through some theoretical framework.
π
Surely the ratio of a circle’s circumference to its diameter is a fixed and fundamental constant of the universe?
Alas, no. Pi is a constant in Euclidean geometry but the experiments above suggest that the physical world does not follow the rules.
In the field theory of Meyl, physical length is determined by field strength and so the apparent geometry of the real world is also a reflection of field strength and this is unlikely to give rise to a Euclidean geometry.
There is no proof that the physical world is super-imposed upon a Cartesian grid; all we have are some sort of physical measurements and the Mine experiment shows that our measuring tools do not follow the rules of traditional geometry if gravitational fields are involved.
If we take a long piece of string out into space and measure the radius as we go, we expect to find that the circumference of a circle orbiting the Earth is 2πr but both Meyl’s theory and the Hafele-Keating experiment suggest otherwise.
Geometry is therefore a function of field strength and this will vary continuously through space. The existence of a perfect circle or square in physical space is highly unlikely but the space in which we live is approximately Euclidean over small distances and so nobody has noticed.
Dark Matter
The invention of Dark Matter and Dark Energy with no direct evidence whatsoever of their existence is surely one of the greatest embarrassments of modern science. They have assumed this ‘stuff’ to comprise over 95% of the known universe simply because they have an incorrect model of gravity.
Konstantin Meyl proposes that in addition to gravity there is the possibility of resonant neutrino attraction between individual galaxies and stars to help resolve the matter.
We can note here that if you have no stable concept of time nor distance and have declared gravitational forces to be constant when they are measurably variable and unrelated to the amount of matter, then you are already in Big Trouble.
The speed of light
The speed of light is declared to be a fundamental constant within the framework of Einstein’s relativity. What this means is that whatever speed you manage to measure for light it must necessarily come to the same value. If it appears to be a different value then it is something else that has varied.
‘Speed’ is calculated as distance per unit time but, as explained above, there is no consistent definition of either distance or time, so if the speed of light differs from its decreed value then scientists are now free to blame variations in either time or distance according to their whim.
In Rupert Sheldrake’s TED talk “The science delusion”, he mentions that the speed of light slowed down by about 20 km/s between 1928 and 1945 before resuming its approved value. The response of the standards authorities was to simply re-define the length of the metre in terms of the speed of light so as to correct for the difference, thereby confirming that distance is no longer a fundamental quantity of physics.
The units of the gravitational constant
The gravitational constant is equal to approximately 6.67×10⁻¹¹ cubic metres per kilogram per second squared, i.e. 6.67×10⁻¹¹ m³·kg⁻¹·s⁻².
We will merely note here that not one of metres, kilograms or seconds has a stable definition and yet they are all assumed to combine together to give a constant value!
The vortex physics of Konstantin Meyl
The vortex physics of Konstantin Meyl contains a single vector differential equation with one ‘constant’ only which he calls ‘c’, by analogy with the speed of light, and which in his framework is the speed of field propagation. There are no other variables within the system with which to compare this value and so ‘c’ may be set to unity without any loss of information.
The whole of physics is described via a single equation which means there are no separate ‘stuffs’ needing adaptation or calibration to one to another and hence no fundamental constants are needed.
Moreover, since there is only one equation, there is no need for translation from one set of units to another and no possibility of any extra units arising, so there is never any need for constants; there is simply no place for them in the theoretical framework.
Fine tuning?
Nope. There are no constants and therefore nothing to fine tune.
The fine tuning argument has been used to advocate for intelligent design on the grounds that the precise values of the constants we see cannot have arisen by accident whilst atheists prefer to think that the constants are different in an infinite number of different universes, with only the single universe that we inhabit being lucky enough to have the right values.
We now see that the idea of fine-tuned constants arises from an inadequate model of physics and that all those fascinating debates are just a waste of time. Either side could have paused to think that contemporary physics is incomplete and that this is what necessitates the introduction of all these new constants.
References
“Scalar Waves: a first Tesla physics textbook for engineers” – Konstantin Meyl
This post is an AI-generated summary of the book ‘Neutrino Power’ by Konstantin Meyl and Johannes von Buttlar.
The document discusses a conversation between Johannes von Buttlar and Prof. Dr. Konstantin Meyl about the experimental evidence of room energy and neutrinos, exploring new physical theories and their implications for understanding the universe.
Discussion on Free Energy Concepts
The conversation between Johannes von Buttlar and Prof. Dr. Konstantin Meyl explores the concept of “free energy,” its implications, and the potential for new energy sources derived from neutrinos and scalar waves. They analyze the limitations of current energy technologies and the need for innovative approaches to meet future energy demands.
The term “free energy” is discussed, emphasizing that all energy sources are ultimately free but come with costs related to extraction and distribution.
Concerns about environmental impacts and the sustainability of current energy sources are highlighted.
Alternative energy solutions, such as wind and solar power, are critiqued for their limitations in reliability and energy output.
Prof. Meyl asserts that a new form of energy exists, which he refers to as “free energy,” and he believes it can be harnessed effectively.
Neutrinos and Scalar Waves
The dialogue delves into the properties of neutrinos and scalar waves, suggesting that these phenomena could provide a new understanding of energy transmission and interaction.
Neutrinos are described as subatomic particles that may have mass and charge, challenging existing scientific assumptions.
Scalar waves are introduced as a form of energy transmission that operates differently from traditional electromagnetic waves.
Prof. Meyl presents experimental evidence suggesting that scalar waves can transmit energy without the losses associated with conventional methods.
The potential for harnessing these energies for practical applications is emphasized, with claims of achieving efficiencies exceeding 500%.
Tesla’s Contributions to Energy Science
The discussion acknowledges Nikola Tesla’s pioneering work in energy transmission and his theories regarding scalar waves, which have largely been overlooked in modern physics.
Tesla is credited with discovering the principles of scalar waves and their potential applications in energy transmission.
His experiments demonstrated the ability to transmit energy wirelessly, which is now being revisited in light of new scientific understanding.
The conversation suggests that Tesla’s insights could lead to breakthroughs in energy technology if properly recognized and developed.
Experimental Evidence and Practical Applications
Prof. Meyl shares details about his experiments that demonstrate the principles of scalar wave energy transmission, providing a basis for further exploration in this field.
The experimental setup involves a wireless energy transfer system using resonant coils, which successfully transmits energy between sender and receiver.
Measurements indicate that the system can achieve efficiencies of over 1000%, challenging conventional energy transfer models.
The experiments are designed to be reproducible, allowing others to verify the findings and explore the technology further.
Implications for Future Energy Solutions
The conversation concludes with reflections on the potential impact of these discoveries on future energy systems and the need for a paradigm shift in energy technology.
The authors argue for a reevaluation of current energy practices in favor of more sustainable and efficient methods based on scalar wave technology.
They envision a future where energy can be harnessed more effectively, reducing reliance on fossil fuels and minimizing environmental impact.
The discussion emphasizes the importance of interdisciplinary collaboration to advance understanding and application of these concepts in practical energy solutions.
The Coupling of Scalar and Transverse Waves
The text discusses the interrelationship between scalar waves and transverse waves, emphasizing their spontaneous transformation and coupling in various applications. This coupling has practical implications in fields such as telecommunications and electromagnetic compatibility.
Scalar and transverse waves can transform into each other spontaneously.
Both types of waves appear in the same wave equation.
Practical examples include the reception of ground waves and broadcast waves using the same antenna.
Filtering scalar waves can reduce measurable field strength but does not eliminate them entirely.
Implications of Scalar Wave Filtering
The conversation highlights the challenges and potential of filtering scalar waves, particularly in the context of mobile phone usage and electromagnetic shielding. The effectiveness of shielding against scalar waves is questioned.
A Faraday cage can filter out transverse waves, allowing only scalar waves to pass.
Filtering methods may not provide complete protection against electromagnetic pollution.
The coupling of wave types means that reducing one type may also reduce the other.
Health Concerns Related to Mobile Phone Usage
The discussion raises concerns about the health implications of mobile phone radiation, particularly the effects of scalar waves on users. The conversation suggests that current mobile technology may not adequately address these health risks.
Mobile phones emit both transverse and scalar waves, with scalar waves potentially being more harmful.
Users are advised to use external antennas to mitigate exposure.
There are reports of increased learning difficulties in children near mobile phone towers.
The Role of Education in Addressing Wave Issues
K.M. emphasizes the importance of educating students and professionals about scalar waves and their implications for technology and health. This education aims to raise awareness and improve technology design.
K.M. conducts lectures and seminars to inform about electromagnetic compatibility and scalar waves.
There is a need for better understanding among engineers regarding the implications of scalar waves.
K.M. aims to influence technology development to minimize biological risks.
Critique of Current Mobile Technology Development
K.M. criticizes the design of current mobile phones, arguing that engineers lack understanding of scalar waves, leading to potentially harmful designs. The conversation suggests that this oversight could have serious health implications.
Current mobile phones are optimized for scalar waves, which may increase health risks.
The trend of shortening antennas has led to unintended consequences, such as increased scalar wave emissions.
K.M. calls for a reevaluation of mobile technology to address these issues.
Historical Context of Wave Physics
The text provides a historical perspective on the development of wave physics, particularly the decline of vortex physics in favor of Newtonian mechanics. This shift has implications for understanding modern physics.
Vortex physics was historically significant but has been marginalized in favor of Newtonian methods.
The inability to isolate and measure vortices has hindered their acceptance in modern physics.
K.M. advocates for a return to vortex concepts to better understand physical phenomena.
The Need for a New Field Theory
K.M. proposes the development of a new field theory that incorporates both vortex and potential waves, challenging the limitations of Maxwell’s equations. This new theory aims to provide a more comprehensive understanding of electromagnetic phenomena.
K.M. suggests that Maxwell’s theory is incomplete and lacks causal relationships.
The proposed hydromagnetic field theory would replace the need for quantum explanations.
This new theory could unify various physical phenomena, including gravity and chemistry.
Understanding the Nature of Particles
The conversation explores the nature of particles, particularly electrons, and their properties as potential vortices rather than discrete entities. This perspective challenges traditional views in quantum physics.
Electrons are described as dipoles rather than monopoles, with both positive and negative charges.
The spherical shape of particles is attributed to the pressure of the vacuum.
The duality of electric and magnetic fields is emphasized, with implications for understanding particle behavior.
Conclusion on the Future of Physics
The text concludes with a call for a paradigm shift in physics, advocating for a more integrated approach that considers both fields and particles as interconnected phenomena. This shift could lead to new discoveries and advancements in technology.
A new understanding of fields and particles could revolutionize physics.
The integration of vortex and potential theories may lead to breakthroughs in various scientific fields.
K.M. emphasizes the importance of re-evaluating established theories to foster innovation.
The Concept of Antimatter and Particles
The discussion revolves around the existence of antimatter, its relationship with matter, and the implications of particle interactions. The conversation highlights the theoretical framework of particles and their antiparticles, particularly focusing on electrons and positrons.
Two possible vortex directions exist: clockwise or counterclockwise, affecting the sign of field indicators.
An electron, with a negative charge, can transform into a positron, which has a positive charge at its center.
Antimatter is theorized to exist in equal quantities to matter, suggesting the potential for entire solar systems made of antimatter.
When matter and antimatter collide, they annihilate each other, resulting in the release of energy in the form of light.
The photon is described as a pair of oscillating electron-positron particles, exhibiting dual properties of matter and antimatter.
The Nature of Neutrinos and Their Properties
The conversation delves into the characteristics of neutrinos, their interactions, and their role in particle physics. Neutrinos are presented as unique particles with specific properties that differentiate them from other particles.
Neutrinos are considered as oscillating ring vortices, possessing a swinging charge that averages to zero, allowing them to pass through matter undetected.
They interact weakly with matter, causing phenomena like beta decay in neutrons.
The model suggests that neutrinos can be harnessed for technological applications, termed “Neutrinopower.”
Neutrinos have no mass and can travel at speeds exceeding that of light under certain conditions.
The Strong Interaction and Proton Stability
The discussion addresses the strong interaction, its role in atomic nuclei, and the stability of protons. The conversation critiques existing theories and proposes a new model for understanding these phenomena.
The strong interaction, or nuclear force, is responsible for holding atomic nuclei together despite the repulsion between positively charged protons.
Current theories, including the introduction of quarks and gluons, are criticized for lacking empirical support and clarity.
The proposed model suggests that protons consist of an electron and a positron pair, leading to a stable configuration that explains their magnetic moment and charge.
The stability of protons is attributed to the internal structure and the arrangement of their constituent particles.
The Role of Faraday’s Law in Electromagnetic Theory
The conversation highlights Faraday’s law of induction and its implications for understanding electromagnetic fields. The discussion emphasizes the need for a new approach to field theory based on Faraday’s principles.
Faraday’s law describes the relationship between magnetic and electric fields, demonstrating that a moving magnetic field induces an electric field (written out in standard form after this list).
The duality of electric and magnetic fields is emphasized, suggesting that both can transform into one another under relative motion.
The discussion proposes a new mathematical framework that incorporates Faraday’s law as a foundational principle for a comprehensive field theory.
The approach aims to reconcile existing theories with empirical observations, moving beyond the limitations of Maxwell’s equations.
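For reference, the induction law at the centre of this discussion is standardly written ∇ × E = −∂B/∂t (the Maxwell-Faraday relation), and the dual relation ∇ × H = ∂D/∂t, valid in the absence of conduction currents, supplies the symmetry between electric and magnetic fields that the discussion emphasises. These are textbook forms, quoted here for orientation rather than taken from the conversation itself.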
The Ether Concept and Its Scientific Relevance
The conversation explores the historical and contemporary significance of the ether concept in physics. The discussion critiques the dismissal of the ether and its implications for understanding light and electromagnetic fields.
The ether is defined as the medium through which light propagates, providing a framework for understanding the speed of light.
Historical experiments, such as the Michelson-Morley experiment, failed to detect an ether wind, leading to the rejection of the ether concept.
The discussion argues for the ether’s relevance, suggesting it as a necessary component for explaining electromagnetic phenomena.
The ether is posited as a field that influences the propagation of light, with implications for understanding the nature of space and time.
The Nature of Light Speed
The discussion revolves around the concept of light speed as a variable rather than a constant, challenging traditional physics. The implications of this perspective suggest a need for new mathematical transformations to describe motion between different inertial systems.
K.M. argues that if light speed is variable, a new coordinate transformation is necessary, incorporating the Lorentz transformation as a special case (its standard form is recalled after this list).
J.v.B. highlights the mathematical complexity of the Lorentz transformation, questioning its physical interpretation.
K.M. asserts that Einstein’s assumption of constant light speed introduces paradoxes, which could be avoided with a different approach.
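For reference, the standard form of the Lorentz transformation under discussion (textbook physics, not a formula from the conversation itself) is x′ = γ(x − vt) and t′ = γ(t − vx/c²), with γ = 1/√(1 − v²/c²). K.M.’s proposal amounts to letting c itself vary with field conditions, so that this transformation is recovered only in the constant-c special case.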
The Role of Fields in Physics
K.M. proposes that physical phenomena, including length contraction and gravitational effects, are influenced by fields rather than just motion. This perspective leads to a new understanding of how fields dictate physical measurements.
The concept of length contraction is tied to the Lorentz transformation, which K.M. connects to field strength.
K.M. emphasizes that the electric and magnetic fields influence the dimensions of objects, leading to observable effects like length contraction.
The relationship between field strength and length is expressed as a simple proportionality, contrasting with complex mathematical expressions.
Objectivity vs. Relativity in Physics
K.M. distinguishes between a subjective observer theory and an objective theory that seeks to understand physical reality beyond observation. This shift in perspective aims to provide a more accurate representation of physical phenomena.
K.M. criticizes the reliance on subjective observations in modern physics, advocating for an objective approach that considers what physically occurs.
The objectivity theory posits that the constancy of light speed is a mere measurement constant, not a fundamental property of nature.
J.v.B. acknowledges the challenges of reconciling subjective observations with objective reality.
Unifying Forces and Interactions
K.M. presents a unified theory of interactions, suggesting that all forces, including gravity and electromagnetism, can be explained through field interactions. This approach offers a new framework for understanding fundamental forces.
The theory posits that the perceived gravitational attraction between particles arises from their field interactions rather than a traditional force.
K.M. explains that electromagnetic interactions result from the behavior of open field lines, while closed field lines correspond to neutral particles.
The model suggests that gravitational effects are a consequence of the geometry of space influenced by these fields.
Implications for Energy and Matter
The discussion touches on the potential for energy generation from fields and the nature of matter at a fundamental level. K.M. suggests that understanding these principles could lead to new energy solutions.
K.M. theorizes that energy is a state description of electromagnetism, and the conservation of energy is a derived principle from field interactions.
The possibility of generating energy from the vacuum or neutrinos is mentioned, although practical applications remain theoretical.
The transformation approach allows for the derivation of physical laws, such as the conservation of energy, from the field theory perspective.
Railgun and Neutrinopower Concepts
The discussion revolves around the Railgun as a practical example of Neutrinopower, highlighting its unexpected energy output and the implications of such technology. The conversation emphasizes the potential for free energy generation and the challenges associated with harnessing it effectively.
The Railgun, known for its high energy output, reportedly produced 399 GJ from an input of only 16.7 MJ, an Over-Unity effect of roughly 24,000 (the ratio is checked in the short sketch after this list).
Engineers involved in the SDI project faced significant challenges, including structural failures during tests.
The Railgun operates using high voltage and rapid changes in current, similar to natural phenomena like lightning.
The concept of Neutrinopower suggests that Neutrinos can be materialized and harnessed for energy, drawing parallels to natural energy conversion processes.
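The Over-Unity figure quoted above follows directly from the two energy values given in the text. A minimal check in Python (both numbers are taken from the text itself; nothing else is assumed):

```python
# Check of the Over-Unity ratio quoted for the Railgun.
output_joules = 399e9   # 399 GJ, reported output
input_joules = 16.7e6   # 16.7 MJ, reported input

ratio = output_joules / input_joules
print(f"Over-Unity factor: {ratio:,.0f}")   # ~23,892, i.e. roughly 24,000
```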
Challenges in Harnessing Free Energy
The conversation highlights the difficulties inventors face when attempting to create stable free energy devices, particularly regarding control mechanisms and energy regulation.
Continuous operation of free energy devices can lead to catastrophic failures if not properly regulated.
Many inventors fail to consider the necessary control systems, leading to instability and potential destruction of their devices.
Historical examples, including Tesla’s experiences, illustrate the risks associated with free energy experimentation.
Neutrinos and Biological Implications
The dialogue explores the biological effects of Neutrinos on human cells and their potential link to aging and diseases like cancer.
Increased exposure to Neutrinos may damage mitochondria, leading to energy deficiencies in cells and potentially accelerating aging.
The discussion suggests that excessive Neutrino exposure could contribute to rapid cell division, possibly resulting in cancer.
The concept of spontaneous human combustion is linked to Neutrino accumulation, indicating a need for further research into its biological effects.
Tesla’s Innovations and Theoretical Applications
The conversation delves into Nikola Tesla’s contributions to energy transmission and his visionary ideas regarding wireless energy transfer.
Tesla’s work on the single-wire transmission system demonstrated a theoretical efficiency of 100%, that is, transmission with no energy losses.
His experiments with high-voltage systems and flat coils led to significant advancements in energy transmission technology.
Tesla’s vision for wireless energy distribution was ahead of its time, facing resistance from investors concerned about unregulated energy distribution.
Future of Energy Technologies
The discussion concludes with reflections on the future of energy technologies, particularly the potential of Neutrinopower and Tesla’s theories.
Neutrino-based energy systems are seen as a decentralized and highly efficient alternative to traditional energy sources.
Tesla’s single-wire and wireless energy transmission concepts remain relevant, with potential applications in modern energy systems.
The conversation emphasizes the need for further exploration and development of these innovative energy solutions to address current energy challenges.
Neutrinopower and Its Applications
Neutrinopower is a revolutionary concept that utilizes neutrinos for energy generation, challenging traditional electrical engineering principles. The discussion highlights the potential of new technologies and materials needed to harness this energy effectively.
Neutrinos can be attracted and their density increased through resonant interactions.
New components are required for isolator technology, replacing conventional electrical components.
Neutrinolyse, a process where neutrinos interact with water, can produce hydrogen and oxygen without consuming electrical energy.
Stanley Meyer developed a water-fuel cell that uses water as a fuel source, achieving a fuel consumption of 2.8 liters per 100 kilometers.
The Role of Water in Neutrinopower
Water plays a crucial role in enhancing the effectiveness of neutrino interactions, acting almost like a catalyst.
Water’s high dielectric constant allows for strong interactions with potential vortices.
The dipole nature of water molecules facilitates easy resonance with neutrinos.
Increased water content in batteries is said to enhance their recharging capabilities, with lead-acid batteries cited as a prominent example.
Historical Context and Technological Challenges
The conversation touches on historical figures and the challenges faced by inventors in the field of free energy and neutrino technology.
Inventors like Walter Schauberger and Stanley Meyer faced significant obstacles, including suppression of their technologies.
The discussion reflects on the potential dangers and risks associated with pioneering new energy technologies.
Historical events, such as the observation of supernovae, are linked to changes in neutrino radiation and its effects on Earth.
Neutrinos and Cosmic Phenomena
Neutrinos are linked to cosmic events, such as supernovae, which can significantly impact the Earth and its environment.
Supernovae release vast amounts of neutrinos, which can affect solar activity and geological events on Earth.
Historical supernovae may have influenced human history and biological development due to changes in radiation levels.
The potential for increased neutrino radiation could lead to geological disturbances, including earthquakes and volcanic eruptions.
Theoretical Implications of Neutrinos
Theoretical discussions suggest that neutrinos could play a role in understanding fundamental forces in the universe, including gravity and electromagnetic interactions.
Neutrinos may provide insights into the structure and behavior of galaxies, challenging existing astrophysical models.
The concept of resonant interactions could explain phenomena that current physics struggles to address.
The discussion proposes that the universe operates on a cycle of energy exchange, with neutrinos being central to this process.
Future of Neutrinopower Technology
The future of energy generation may heavily rely on the utilization of neutrinos, with significant implications for technology and society.
There is optimism that advancements in neutrino technology could lead to cleaner and more efficient energy sources.
The timeline for widespread adoption remains uncertain, influenced by technological developments and societal acceptance.
The potential for a shift in energy paradigms could reshape industries and environmental practices globally.
Literature Cited in the Context
The text provides a comprehensive list of literature related to electromagnetic compatibility and scalar wave technology, primarily authored by K. Meyl and other notable figures. This literature serves as foundational references for understanding the principles discussed in the context.
Key works by K. Meyl include three parts on electromagnetic compatibility, with English translations titled “Scalar Waves.”
Other notable references include works by Nikola Tesla, Johannes von Buttlar, and various scientific publications on electromagnetism and energy.
The literature spans various topics, including free energy, electromagnetic fields, and theoretical physics.
Recommended Literature for Further Study
The text recommends specific books and resources for readers interested in the subject of neutrino power and scalar wave technology. These resources are essential for a deeper understanding of the concepts presented.
The three main books by K. Meyl are essential:
Part 1: Causes, phenomena, and scientific consequences (ISBN 3-9802542-8-3, 16 EUR).
Part 2: Free energy and neutrino interaction (ISBN 3-9802542-9-1, 16 EUR).
Part 3: Information technology and scalar waves (ISBN 3-9802542-7-5, 16 EUR).
Additional documentation and videos are available for purchase, enhancing the learning experience.
Experiments on Scalar Wave Transmission
The text outlines various experiments related to scalar wave transmission, emphasizing their unique properties and potential applications. These experiments challenge conventional physics and demonstrate extraordinary phenomena.
Experiments include wireless energy transmission and feedback from the receiver to the sender.
Claims of free energy generation with approximately 10 times over-unity efficiency are presented.
Scalar wave transmission is suggested to occur at about 1.5 times the speed of light, alongside observations of tunneling effects.
Available Experimentation Sets for Learning
The text describes two types of experimentation sets available for purchase, aimed at different audiences interested in exploring scalar wave technology. These sets facilitate hands-on learning and experimentation.
The Demonstration Set is priced at 800 EUR and is designed for non-experts, allowing five experiments without additional tools.
The Experimentation Set costs 1400 EUR and includes advanced equipment for physicists and engineers, featuring three different coil sets and a frequency counter.
Stefan Lanka rejects the ideas that matter is made from a collection of atoms and that biological tissue is made from cells, preferring to regard living systems as composed of a ‘primordial substance’ sometimes referred to as ‘ether’ and at other times as ‘Pi water’, from which all other materials are derived.
This post looks at some of his comments from the perspective of vortex physics and assumes a distributed electromagnetic bio-field that organises all biological systems. See: The nature of the bio-field
Nothing has caused as much damage to humanity, both spiritually and physically, as the atomic theory. Einstein advocated the application of knowledge about the ether, the primary substance of life.
Stefan has a point. The Bohr model of the atom that we are all familiar with is one of solid marble-like particles that orbit a nucleus and possess various ‘properties’ such as mass and charge. Atoms are claimed to be practically unbreakable outside of a nuclear reactor or the centre of a star and are thought to be the fundamental building blocks of all material objects.
This encourages a view of living cells that sees them as constructed of atoms, the same way that a house is constructed of bricks, that is to say piecemeal, one brick (atom or molecule) at a time and according to a design or template (blueprint).
A ‘digital’ view of biology is developed which is at odds with reality but readily accepted, because the prevailing theory from physics, being easy to understand, has been thoroughly absorbed at the roots of our intuition and leads to a deep-rooted bias in all scientific thought.
Never proven
In another post, Lanka claims that the Atomic Theory has never been proven.
If this seems outrageous, simply reflect that almost all contemporary physicists now advocate for a quantum model of the atom whereby all matter is a manifestation of a ‘quantum field’, a continuum of probabilities. The reason they have adopted this model is because of various phenomena that are simply not explainable via the Bohr model, for example the famous Double Slit Experiment.
The two models are at odds with each other and cannot both be true at the same time.
Stefan is therefore accurate in this respect.
The ‘ether’
Stefan uses the term ‘ether’ to denote the fundamental substance of the Universe. This is an unfortunate choice of words as it refers to an earlier formulation of physics whereby the whole of the material universe sat inside the etheric substance which provided an external framework, a reference point to define time and distance.
The existence of an ether as separate from material reality has itself never been proven, and it adds an unnecessary dualism to reality: a division between two types of ‘substrate’. Far better to envisage the whole universe as consisting of a single ‘substance’ conforming to a single set of laws, which is what I think Stefan is trying to say.
Pi water and elemental transmutation
Stefan has also referred to something called Pi water as the fundamental substance of Life and claimed (after Dr. Peter Augustin) that all substances emerge from this substrate.
Again, a seemingly outrageous claim if we accept the Bohr model of the atom where matter is conserved, never destroyed and never created outside of a Big Bang.
However, the experiments of Louis Kervran and others give very strong empirical evidence that elements can be transmuted from one to another within biological systems and even that matter can be created and destroyed in synchrony with lunar cycles. See: Transmutation
Somehow, electromagnetic vortices in intracellular water accumulate enough energy to change an atom from one element to another. Konstantin Meyl has theorised that additional absorption of solar neutrinos can accumulate sufficient energy to actually create electrons within living cells.
The fundamental substance of the Universe though is not ether or Pi water but an electromagnetic field from which water itself is an emergent substance. Transmutation is achieved, not from the water, but via energy accumulated and transduced by the bio-field itself.
Spirit identified as the bio-field
This substance integrates spirit because it is the building, energy and information substance of life. In academic biology and medicine, the assumption of spirit is excluded.
An electromagnetic bio-field permeates all living systems and what appears to be ‘matter’ is really an illusion created by highly stable vortex structures within the field. It does not need to be integrated as there is nothing to integrate into; all is a unity and all that exists is the field.
This field fulfils all the criteria of what Lanka terms ‘spirit’ and is indeed largely dismissed as a source of either information or energy by academic biology, being relegated to the status of a mere power source or waste disposal unit.
In fact the bio-field (spirit) is the progenitor of all biological activity, from metabolic regulation to consciousness; it is the primal source of all energy and organisation.
Our organs are organized in interconnected tissues (w+ 1/2/3-2019) and not in cells. The cell theory has never been proven, always refuted and derived from the atomic theory.
The diagram below shows the vortex structure at the surface of the sun. A living being is much smaller but the laws of electrodynamics are the same and so we may suppose that a similar arrangement is present in the bio-field of the body.
An overall toroidal electromagnetic field fragments into smaller vortices which self-organise into an energetic cellular structure. Matter accumulates at ‘hot’ points and tissue is formed in a regular pattern resembling cells.
Many researchers (e.g. Robert Becker) describe electric fields in living systems and others (Nick Lane) describe circular electric ‘currents’ resembling vortex structures. Many others describe a sharp electric gradient at cell boundaries.
So whether or not a ‘cell’ exists as described, the bio-field itself necessarily has a cellular structure arising from its vortex nature. This structure is reflected in the material substance of the tissue and leads to the impression of separate physical cells.
Lanka has stated that the nucleus of a cell is ‘free to move’ within the tissue. However, the nuclei will tend to adopt a certain spatial ordering whilst rotating slowly. This is entirely consistent with the existence of an energetic vortex structure with the nucleus at the centre and which maintains separation and rotation of such nuclei.
Living tissue has an electrically cellular structure.
Tissue repair
There is a claim (possibly from Stefan) that if a finger is cut or an apple is broken then immediately some sort of bi-layer is created and that this has been interpreted as a cell membrane.
This is very credible given the existence of a morphogenic vortex field.
Any discontinuity in tissue entails a potential discontinuity in the supervening vortex structure. However, the vortex is tied to the laws of physics and will persist in some form or another; the rotational energy must complete its circuit somehow.
A cut or break then introduces an altered energy structure at the new surface and an altered energy structure means modified biological activity. New tissue is assembled almost instantly according to the laws of electromagnetism acting directly upon existing tissue. A new membrane has formed and a healing process has begun.
Exosomes
Within the (refuted) cell theory, the products of disintegration of isolated lumps of tissue, which are interpreted as cells, are labelled “exosomes”; human/animal excretions containing connective tissue are also interpreted as “exosomes”.
What is meant by ‘disintegration’ in biological systems?
To answer this we need to understand what it is that holds together the tissues in the first place.
If each cell is a vortex structure with a negative electrical field moving around the periphery, then there necessarily exists a magnetic dipole with North-South polarity along the axis of electrical rotation. It is this arrangement that holds the cells together, with the magnetic forces pulling the tissue together and the electrical forces maintaining separation.
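As a rough quantitative illustration of this picture, the magnetic moment of a charge circulating on a ring follows from the standard formula m = I·A. This is a minimal sketch only; the cell-scale charge, speed and radius below are hypothetical illustrative values, not measurements:

```python
import math

# Magnetic dipole moment of a charge q circulating at speed v on a ring
# of radius r, using the standard relations I = q*v / (2*pi*r) and
# A = pi*r^2, which combine to m = I*A = q*v*r / 2.
def ring_dipole_moment(charge_C, speed_m_s, radius_m):
    current = charge_C * speed_m_s / (2 * math.pi * radius_m)
    area = math.pi * radius_m ** 2
    return current * area

# Hypothetical cell-scale numbers: 10 micron radius, 1e-14 C of net
# peripheral charge, circulating at 1 mm/s.
m = ring_dipole_moment(1e-14, 1e-3, 10e-6)
print(f"dipole moment: {m:.1e} A*m^2")   # ~5e-23 A*m^2 for these values
```

However small such a moment is for any one cell, the argument above depends only on its existence and axial orientation, not on its magnitude.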
All energy fields are to some extent ‘lossy’ and so eventually the electromagnetic forces will tend to weaken and the tissue will literally fall apart.
The laws of physics still hold, however, and so new configurations of matter are adopted, still according to some vortex law. We should expect, therefore, to see new ‘cellular’ shapes begin to appear, with even tiny vortex satellites surrounding them.
There are claims that these exosomes are somehow helping the organism survive by transporting energy and other resources from one place to another. Possibly, but whatever the function, they are created from a deteriorating bio-field and will behave according to the laws of such an environment.
Lack of energy has caused tissue to disintegrate and the debris has adopted a new ‘least-energy’ state according to its new environment. Circular membranes are therefore in abundance, with what look like new cells appearing solely from the properties of membranous tissue imbued with electromagnetic vortex energy.
These artefacts are separated from their bio-field and are merely adopting new forms as dictated by the laws of physics. There is not necessarily any biological ‘meaning’ in any of these shapes.
Intracellular water
Gilbert N. Ling – the fluid in the “cells” is not water
In the interior of a cell we have a substance that is denser and more viscous than water, with a gel-like consistency, and somehow organised, energetic and ‘alive’.
Many researchers have tried to describe this substance as: ‘in an excited kinetic state’, ‘quantum coherent’, ‘fourth phase water’ or ‘full of de-localised electrons’ to choose just a few examples.
The properties of such a gel seem at odds with a classical description of water and nobody has been able to explain them in terms of molecular structure.
It would seem that Ling is somewhat justified in claiming it to be other than water, and Lanka correct to reject conventional atomic theory as a foundation for describing the intracellular gel.
The physics of Konstantin Meyl, however, gives a richer model for molecular structures that seems more in tune with the needs of biology as a whole. Electrons in this model are stable electromagnetic field vortices which have spin in the electric domain and therefore form a magnetic dipole. See: The atom
The properties of intracellular water are likely explained by the forces arising from these fields. Magnetic forces pull the molecules together and lead to some sort of organisation and alignment whilst electric forces maintain separation between molecules.
The cell is kept in a state of high energy by the body and this leads to close packing of molecules which in turn creates higher density. Viscosity arises from ‘field drag’ and stronger fields will lead to higher viscosity to the point where the consistency turns to a gel.
No elemental transmutation is needed here, just a higher level of ‘free energy’ organised as a nested vortex structure.
Vortex alignment
The cover of the book The Rainbow and the Worm by Mae-Wan Ho shows various living organisms photographed with polarised light. Macro-sized areas are transmitting a single wavelength of light, which means that the constituent molecules are forming some sort of filter.
Ho interprets this as meaning that all the atoms are aligned in the same direction and for her this means some sort of quantum ‘coherence’.
An alternative explanation might be simply that the magnetic forces arising from the vortex structure in living bio-fields have brought all the intracellular water molecules into magnetic alignment.
No relationship between microscopy images and in vivo structures
A motionless electron microscopy image never reveals a living biological process. What is observed under electron microscopy has absolutely nothing to do with what happens in the human biological organism. Any result from the laboratory can provide absolutely no insight into the processes within a living organism.
Activities and morphologies in living systems and in microscopy environments obey the same laws of physics but are subject to different bio-field organisation.
When transferred from a living system to a microscope slide, molecular collectives (‘organelles’) will break down and reassemble almost instantaneously in accordance with powerful magnetic forces and the general ‘cellular’ appearance of the ensuing shapes will give the impression of some sort of meaningful biological structures. This is an illusion.
Hypothesis: Living systems are controlled by an electromagnetic bio-field that is responsible for all biological organisation of information, energy and matter. This field takes on the form of energetic vortices which flow through the various conduits provided by the host organism.
The idea can be extended to all natural systems throughout the cosmos and the mechanism can be seen at work in the formation of stars and galaxies, the properties of water, the patterns of weather systems on Earth, the influence of such systems on biological rhythms and even in the induction of disease.
A bio-field regulates at the level of a whole organism and directs energy in a nested vortex system inwards to the organs and thence to the heart of every cell in the body. Even within a cell, energy is again driven inwards towards the nucleus and local vortices are formed around the hexagonal rings of bio-molecules where they act as energy accumulators and transponders at the molecular level.
Vortices form in the insulating myelin sheath around nerve fibres, enabling efficient transmission of arbitrarily large quantities of information at close to the speed of light with minimal loss or corruption. The brain is a series of nested electromagnetic vortices.
A bio-field complex is responsible for the inheritance of phenotype and even of acquired characteristics. Such vortex fields can absorb energy from external sources (heat, Gibbs energy, solar neutrinos, atmospheric discharge) and no doubt were instrumental in the formation of the first living systems.
An energy vortex will ‘want’ to travel and will find a path of least resistance whether it be in space, bio-systems or electrical circuitry. If energy is produced by a chemical reaction, for example in a simple battery, and then presented with an insulated wire, then the conditions are right for the production of an electric current and energy will move from one place to another as directed by the conductivity of the local environment.
Concentration vs dissipation
The idea of an energetic vortex flow together with an ‘accumulation principle’ is in stark contrast to the default world view of essentially dissipative processes which somehow accumulate sufficient energy, information and stability to first create, maintain and then reproduce, a biological organism.
It is worth comparing the two frameworks in general terms and asking which is more propitious for the formation and continuance of ‘life’.
Radial dissipation, big bangs and randomness
We are told that the world began with a Big Bang and that on average all matter is expanding outwards, all the time losing ‘order’, increasing in entropy (disorder) and heading towards an inevitable ‘heat death’.
The main process opposing this is that of gravity, which is a simple centripetal force, drawing everything towards a central point. This may be instrumental in the formation of simple spheres in the form of stars and planets, but it is clearly not sufficient to produce a living organism.
Chemical reactions occur, sometimes driven by ‘heat energy’ but still on an energetically and informationally ‘downward’ slope.
Two molecules or atoms encounter each other by chance and maybe stick together if they happen to have enough energy to do so, but a random coupling is surely not conducive to the construction of a living being. Where did the energy come from to achieve the coupling? Did it accumulate by ‘chance’ again?
In the case where a reaction releases energy, that energy is either radiated outwards as photons or dispersed outwards in the form of ‘heat’. Both processes are dissipative, thermodynamically ‘downhill’ and anathema to the creation and maintenance of an organism that is often said to be ‘far from thermodynamic equilibrium‘.
Somehow within this environment, ‘life’ began; but how?
We are asked to believe that in an environment of random vibrations of molecules and the radial emission of photons at the speed of light, that somehow life emerges; somehow global ‘organisation’ arises from random events with no informational template and no fundamental organisational principle; somehow energy accumulates as a result of processes whose main tendencies are to radiate and dissipate.
Vortex concentration
Consider, in contrast to the above, a default world view where energy has a propensity, not to radiate but instead to form vortex structures where there is a tendency to spiral inwards and to concentrate at some ‘vortex radius’, a small spherical volume of high intensity energy which can be utilised for chemical reactions and other biological necessities.
We immediately have accumulation, instead of dissipation as a fundamental property of the universe, a basis upon which other processes can be built.
One function of the vortex is to serve as an energy accumulator, absorbing energy up to a critical threshold before releasing it in a pulsatile fashion. Another is to assemble molecules, to draw them together and even align them in preparation for an ensuing enzyme reaction fuelled by the energy from the vortex itself.
In addition, a vortex structure will create a field gradient from centre to periphery, providing a variety of environments within which bio-chemical reactions can occur.
As energy spirals inwards, further modulations of the field take place according to local conditions with further concomitant refinements of structure according to the laws of physics. Both energy and information (scalar wave structure) are continually harvested from the electromagnetic environment and are interpreted, sequestered, released and utilised in a way which is determined by existing physical structures.
We already have a system that satisfies a broad definition of ‘life’.
The ring vortex
The field is electromagnetic in nature and obeys a set of differential equations formulated by Konstantin Meyl which are really just a tidied up version of the familiar Maxwell-Heaviside equations.
Electric and magnetic components of the field are in a continual state of movement (no static fields) and those movements are always at right angles to each other, in accordance with the observed laws of Fleming and Faraday.
Given these constraints, the field has a strong tendency to form stable vortex-like structures of various configurations.
Shown here is a ring vortex with electrical field movement shown in pink and an associated magnetic field in yellow. The magnetic field forms a de facto north-south dipole and the electrical component will allow for self-propulsion of the structure under propitious field conditions.
Other patterns such as helical formations are feasible but the ring structure shown is sufficient to explain many observed phenomena.
The magnetic dipole structure is made explicit in this diagram and occurs in a variety of situations. The electric field shown in green provides ‘electrostatic’ repulsion, keeping elements apart from each other, whilst the magnetic dipole in pink helps to attract, organise and align such structures.
This short video shows the development of the nervous system of a zebra fish. A ring vortex accumulates energy from the environment and this is used to either assemble existing matter, or to create it from scratch via biological transmutation before organising it into nervous tissue.
Ring vortices can almost be seen at the developing tip of each nerve. Ask where the energy comes from to sustain this activity and how the development is directed.
The vortex structure sucks in energy from heat, kinetic motion, Gibbs energy and possibly from the solar neutrino stream, all to be concentrated at the ring itself for developmental purposes. Orientation is achieved by the influence of an ambient magnetic field acting upon the dipole structure of the vortex itself; the rings are guided along the correct path by a ‘morphogenic’ field.
Once complete, the neurons will serve as conduits for similar vortices carrying both energy and information around the organism with a high degree of efficiency. The myelin sheath, being an electrical insulator, is ideal for the formation of the magnetic component of the vortex, and indeed it has been found that the speed of propagation increases precisely where this sheath is thicker. See: Scalar waves and nerves
Cellular organisation
Electromagnetic vortex fields will have a tendency to self-organise into a variety of structures, one of which is a tightly packed cellular structure with an assortment of associated magnetic and electric vortices.
The image below shows an arrangement of such structures found at the surface of the sun. Now clearly biological organisms are much smaller than the sun but the laws of electromagnetism do not make exceptions for scale and are in force at every point in the universe.
We can consider therefore that the cellular structure of a biological system is organised in the first instance by electromagnetic fields and thereafter maintained by the same fields which have been fixed in place by the production of physical matter as with the ring vortices and nerves.
Development and function seem inevitably linked by processes such as this. An early vortex forms an ‘ideal’ shape according to the laws of electromagnetism and then physical matter develops from the vortex energy. The form is then somewhat modulated by the laws of material physics such as fluid pressure and membrane tension etc. to assume a slightly different shape that will sit comfortably within the existing cellular ensemble. Thereafter the vortex field itself is guided by the physical body and performs the duties of energy transfer, information transmission and morphological maintenance.
The origins of Life
There is some evidence to suggest that conditions on early Earth were considerably more electromagnetically active than today and so we can imagine the existence of vortex patterns similar to those of the solar surface (pictured above).
Somewhere in the primordial soup, then, electromagnetic vortices form and stabilise into a cellular ensemble which maintains a constant throughput of energy which may last several millennia. Conditions are stable and varied enough to host the beginnings of pre-biotic ‘life’. Energy is accumulated, matter is concentrated and the first bio-molecules form under this environment.
A common idea is that biological cells are ‘irreducibly complex’ and that a cell is the sum of all the constituent bio-molecules whose creation must precede the creation of the cell. This is a crippling thought and at odds with what is observed.
Bio-molecules in daily life are a product of the cell and not the other way around; the cellular organisation precedes the production of bio-molecules.
Complex molecules emerge from the cell rather than the cell emerging from them. In the vortex scheme described, the cellular structure arises first as a consequence of the laws of physics independently of any physical matter and is followed by the creation of such matter from the intense energies and specific electromagnetic structures present.
Moreover, bio-molecular activity is mediated, not by the molecules themselves, but by the attendant electromagnetic field which gave rise to them in the first place. Development and function are again linked, with the physical form being a concretisation of the original bio-field.
Energy transport
Energy is transported around the organism by a variety of means:
Ring vortices – carry energy from one place to another
Vortex transfer – energy can be transferred from one vortex to another
Heat transfer – this is a form of vortex transfer
Gibbs energy (free energy) – assumed to be thermodynamic in nature but more likely to be organised vortex transfer
Electro-acoustic vibrations – another manifestation of vortex transfer
To get a sense of the behaviour of ring vortices, watch videos of water vortices, smoke rings or plasma rings. Energy is packed into a small volume and moves from one point to another with losses kept to a minimum. The amount of energy transported is somewhat independent of the size of the vortex or its speed of movement.
Ring vortices can merge together or bifurcate. They will appear wherever the conductive environment is suitable. They can transport energy along existing conduits such as nerves and will create temporary conduits (e.g. microtubules) where necessary, leaving them to be dismantled after use.
Gibbs energy
Gibbs energy or ‘free’ energy is assumed to be thermodynamic and hence dissipative in nature, but at the same time responsible for all manner of reactions which surely require precise accumulation of energy at specific points in the cell.
A better way to think about free energy then is to imagine an environment dominated by a complex vortex structure similar to the solar surface (Fig. 1) where energy is free to move between the vortices in a manner similar to that of a flowing river.
The energy is ‘free’ but organised, it will flow with the vortex structure and will tend to attain some state of dynamic equilibrium. A deficit of energy in one place will soon be remedied as energy flows in from somewhere else but the overall vortex structure will be maintained. Half of the work of energy regulation within living systems is already accomplished at the fundamental base level of physical reality.
The laws of electrodynamics, as opposed to the fantasy of thermodynamics, will prevail and there is an organisation and accumulation of energy as opposed to dissipation and disorder.
ADP/ATP
Prof. Konstantin Meyl presents a good argument to the effect that the rotation of the phosphate groups in ADP is powered by electrical vortex energy. The vortices are present in inhaled oxygen, enter the bloodstream via the lungs and energise the ADP therein.
The ADP travels through the arteries to the capillaries and into the tissues, where the energy is released to power the mitochondria. There is no need for any gaseous transfer to take place across the lining of the lungs. See: Do we breathe oxygen?
Hexagonal ring molecules
Again from Konstantin Meyl comes the idea that the hexagonal structures found on many bio-molecules (e.g. chlorophyll) can act as field-energy accumulators. Vortex energy spirals around and is captured by the ring structure to form a strong ring vortex that moves with the molecule.
The vortex will have both electrical and magnetic components, allowing for a variety of possible behaviours.
For bio-chemistry to function as observed, we require some sort of mechanisms to assemble and align molecules, to accumulate energy and to release it as required for reactions to take place.
According to Meyl, there is not sufficient energy in an ultraviolet photon to do what is claimed but what happens instead is that energy accumulates around the ‘head’ of the chlorophyll molecule until some threshold is reached, whereupon it is released and travels to where it is needed. The transport mechanism is so efficient that physicists have assumed some sort of quantum-superconductivity to explain it but it seems that energy transport via ring vortices might be sufficient.
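The accumulate-until-threshold behaviour described here is easy to caricature in code. The following is a minimal sketch of that behaviour only; the input rate, threshold and time step are arbitrary illustrative numbers, not values from Meyl:

```python
# Toy model of an energy accumulator that releases its contents as a
# pulse once a critical threshold is crossed, as described above.
def accumulate_and_release(input_rate, threshold, steps, dt=1.0):
    stored = 0.0
    for step in range(steps):
        stored += input_rate * dt        # energy trickling in
        if stored >= threshold:          # critical level reached
            yield step * dt, stored      # pulsatile release
            stored = 0.0                 # accumulator empties

# Example: 0.3 units per tick, released in pulses at a threshold of 2.0.
for t, pulse in accumulate_and_release(0.3, 2.0, 30):
    print(f"t={t:.0f}: pulse of {pulse:.1f} units")
```

The point of the caricature is only that a steady trickle of input can appear downstream as discrete, well-timed pulses, which is the behaviour the text attributes to the chlorophyll ‘head’.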
Enzyme reactions
Enzyme reactions are puzzling. Sometimes they react and sometimes they don’t. Reactions in a laboratory are different from reactions in vivo. The application of heat seems to speed up reactions. Sometimes acidity affects the reaction rate.
Hypothesised mechanisms include receptors, binding sites and catalysts, but no description of their workings is given in terms of any sort of fundamental laws; what is a receptor made of, for example?
For two molecules to bind together some ‘long range’ attractive force is necessary to draw them near to each other along with sufficient energy to overcome some sort of barrier of ‘potential’.
The mainstream kinetic theory of gases has molecules bumping into each other to supply the movement and energy, but this doesn’t explain all the effects seen. Van der Waals forces arise from the fixed properties of the atomic structure and should not vary with the environment.
Hypothesis: Bio-molecules contain hexagonal ring structures which promote the construction of electromagnetic ring vortices. These act both as energy accumulators and magnetic dipoles and add an extra layer of complexity to molecular interaction.
Magnetic forces exert long range attraction, pulling molecules together and orienting them correctly. The potential barrier is overcome and the whole arrangement settles to a new, stable, low-energy state. There is likely some surplus energy now and this simply diffuses away into the general vortex matrix as ‘heat’ or maybe transduces to infrared light.
The application of heat to the system is a way of adding energy to these ring vortices and will speed up reactions in general. A catalyst is a way of introducing both extra energy and additional organisational forces into the reaction. Energy accumulates on the catalyst and is used for the reaction but the molecule stays intact. The catalyst is not physically destroyed but is now a bit low on energy. It will, however, continue to accumulate energy in order to re-fuel for the next interaction.
Rates of enzyme and other reactions seem to vary considerably with season, lunar cycles and eclipses, as recorded by Simon Shnoll and Giorgio Piccardi. These are hitherto unexplained phenomena.
Energy accumulation is in part from vortex transfer (heat), in part from infra-red absorption and in part from the solar neutrino stream. Neutrino density increases by a huge factor during eclipses, and so the effects seen by Shnoll and Piccardi are just what should be expected. Stirring a solution is merely a way of adding extra vortex energy by kinetic means.
If the body or cell can control energy input to the reaction then the speed and possibly the ‘nature’ of the reaction can be controlled on a highly localised basis.
This scheme adds an extra layer of complexity to the Van der Waals forces that is actually independent of such forces, decoupled from the atomic structure and whose strength varies over time according to both ambient conditions and cosmic cycles. The addition of magnetic dipoles seems to be an adequate explanation for the mechanism of the various receptors, inhibitors etc.
Protein construction
Proteins are complex molecules with well-defined functions in biological systems. Construction is said to be via gene expression, and once constructed the completed molecule needs to be folded precisely or else the whole chain is ‘dismantled’ and the whole process starts over. Initial creation is said to be impossible by ‘chance’, thereby giving encouragement to the intelligent design lobby. Some proteins only have a lifespan of about 10 minutes before, again, they are ‘dismantled’.
So many unanswered questions here.
Assume that a protein starts off as some sort of ‘seed’ whether it be a physical molecule or an electrical eddy current (field vortex). Energy spirals inwards from the ambient electric field and adds to the vortex. Amino acids are sucked in or created on the spot from vortex energy. The whole molecule is assembled via the laws of physics, the precise nature of the vortex and the specific mix of ingredients in the local environment.
The completed molecule folds according to a least energy pathway and a complex field vortex forms at the centre. This vortex continues to accumulate energy and acts as a power source for various cellular processes.
The basic function of any bio-molecule is to transduce energy from the ambient vortex field into something that can be used by the other molecules. Energy is absorbed, accumulated, transduced and dissipated.
An incorrectly folded protein may absorb an indefinite amount of energy without sufficient dissipation and will therefore self-destruct. Proteins with short lifespans similarly do not need to be destroyed by the cell itself but will disintegrate when overloaded with energy. If these molecules are to be dismantled by external means then surely some sort of timer is required, meaning an additional complication, an additional mechanism to be explained.
The same may be true of some toxins; they simply continue to accumulate energy until the molecules or even atoms break down completely.
Bio-molecular evolution is hastened by self-selection, meaning that unsuitable molecular chains will self-destruct on the spot and any cellular environment that does not promote an appropriate energy flux will not survive to reproduce anything. There is no need for billions of years of randomness and selection; no process is truly random, but proceeds always according to the laws of physics and within an environment of a continual flux of vortex energy.
A correctly formed protein will be able to dissipate energy at the same rate at which it is absorbed and it is up to the rest of the cell to make use of this energy in whatever form it is presented. Pulsed energy may be used in enzyme reactions. Enclosing vortex fields may be used for transport of other resources or assistance in maintaining ion gradients. Completed proteins may accumulate further energy and emit more complex structures to be interpreted as ‘information’.
Properties unexplained by molecular structure
An AI engine gives a list of phenomena that are not fully explained by the ordering of atoms within the molecule. They require something else, an electric field of some sort:
Protein folding
Enzyme activity
DNA replication
Delocalisation of electrons
Electrical conductivity
Light absorption
Binding of a drug to a receptor
Recognition of a substrate by an enzyme
Other molecule-specific interactions
Biological transmutation
Louis Kervran performed many experiments showing that the mineral output of many living organisms did not match the input, leading to the inescapable conclusion that living beings are somehow able to transmute elements from one to another according to their own needs.
Chickens raised on land containing no calcium were able to grow, maintain a skeleton and lay eggs with hard shells. The chicks hatching from such eggs contained more calcium than was in the egg in the first place and suffered no health issues. Calcium has been manufactured from some other element.
Manual workers in the Sahara sweated out more potassium than they consumed, but the amount was consistent with the volume of sodium ingested, suggesting that they had transmuted elemental sodium into potassium (Kervran himself proposed the reaction ²³Na + ¹⁶O → ³⁹K, whose mass numbers at least balance: 23 + 16 = 39). Energy was sequestered in the new molecule and excreted from the body, thereby providing an additional cooling mechanism. Restricting sodium input led to heatstroke.
Whatever the details of the transmutation of elements, such a process is going to need considerable energy and, moreover, that energy must be carefully controlled and localised if it is not going to destroy a whole chicken.
The idea of an electromagnetic vortex fits the requirement (Meyl). Energy accumulates and localises at the centre of the vortex. This energy becomes highly concentrated at a small scale and when individual ions are drawn in to the whirlpool they become destabilised according to the high field strength, thus allowing the splitting apart or joining together of elements at the atomic level.
Blood flow
The book “The Heart and Circulation” by Branko Furst summarises over 100 years of research into the nature of blood flow and concludes that the idea of the heart as a pressure pump is inconsistent with reality. The blood is not pushed around by the heart but instead moves with its own motivational force and according to the metabolic needs of the body.
Nobody has worked out how this happens or where the energy comes from, so it is time to go back to the basics of physics and consider how the electromagnetic forces (there are no others) within the blood can be utilised to provide sufficient kinetic energy to maintain a decent flow.
In a paper from Alexander Morozov, ATP and other biological substances were added to water and the solution placed into square channels of various dimensions. The water was seen to self-organise first into a collection of vortices as shown and second into a self-sustaining directional flow along the tube.
Now self-organisation is by the laws of electromagnetism, but there is still the need for a regular supply of energy. Possible sources are considered below.
Popular images show a toroidal electromagnetic field surrounding the body which is measurable for a distance of about five feet away from the body and is assumed to be created by, and emanate from, the heart and other energy centres such as the brain, liver etc. This is hard to verify but sounds ‘likely’.
An electromagnetic field is claimed to be produced by the action of the heart and makes its way largely unscathed through the highly charged mass of muscle and bone to somehow form a torus around the body. The field is so strong as to be measurable several feet away from the body and to be able to affect the heart rate of other people within the proximity.
The heart is already at a temperature close to that at which its proteins will denature, yet it must cope with manual labour in Sahara heat without cooking and still generate enough spare energy to create such a field.
Alternative hypothesis: The observed external biofield is the organisation of already existing external energy which may radiate or may even spiral inwards towards the body. Energy moves inwards but information moves outwards. The internal bio-field is organised as a general toroidal vortex at all scales. Each cell hosts an electromagnetic vortex and generates its own electric field. Energy moves between the cells in the general pattern of a torus.
Energy can spiral outwards to release excess or can spiral inwards towards various vortex centres (Chakras) as a de facto power supply. Increased muscular exertion increases the energy production, increases energy supply towards the heart and also increases vortex transfer outwards as heat loss.
A field is measured outside of the body and is assumed to be radiating outwards from the heart in accordance with traditional beliefs regarding such fields, but the principles of vortex physics allow for different interpretations.
We live between the twin capacitor plates of the Earth’s surface and our ionosphere and as such are surrounded by a continuous stream of electrical discharge in the form of field vortices. These vortices have a tendency to self-organise into larger (or smaller) vortices and will respond to the presence of a human body the way a river might respond to a small pebble or a frond of weed.
The field surrounding the body may therefore be explained, not by the radiation of a generated bio-field but by the organisation of an existing field according to the presence of the body. This field may be ‘static’ but attached to the body or may actually spiral inwards towards the body, thereby providing an additional energy supply.
Once energy has entered the body it is subject to the highly organised conditions within the body, but the general laws of physics still apply. We can envisage the energy flow within the body as comprising a general vortex pattern which moves inwards towards the ‘chakras’ whilst self-organising into a cellular structure within the tissues. Each cell maintains, and is maintained by, its own vortex, with the nucleus at the centre. Within this structure form smaller and smaller energy vortices, right down to the scale of an electron, itself an electromagnetic vortex (Meyl, Scalar Waves).
The heartbeat can be detected in the modulations of the external bio-field, giving the impression that the energy is being emitted from the heart but this is not necessarily the case. It is quite possible for the energy to be actually spiralling inwards towards the body whilst information ripples outwards, using the field itself as a ‘carrier’.
Watch a stable vortex in a stream. The water spirals inwards but toss a pebble near the centre and ripples (information) will still travel outwards, against the vortex flow.
Whatever the requirements of a biological field, it must nevertheless contend with the basic laws of electrodynamics and these necessitate dynamic electromagnetic vortex structures. Energy supply and regulation has vortex movement as its fundamental basis.
The vortex principle
The diagram below comes from the paper: “About vortex physics and vortex losses” from Konstantin Meyl and illustrates the structure of a typical vortex.
Think about a tornado in air or a whirlpool in water. Water spirals inwards to reach a maximum velocity at the vortex radius (shown here as a circle). This radius is clearly visible in the case of a tornado.
Outside the radius, the speed and energy diminish according to an approximate inverse square law, shown here as a curve dependent upon ‘R’ (radius).
Konstantin Meyl: About Vortex Physics
Inside the vortex, the energy gradient is linear and again dependent upon radius. Water or air will spin and will want to spiral outwards according to centrifugal force but will be prevented from doing so by the inward spiralling matter.
When the centrifugal force is balanced precisely by the centripetal force, a stable dynamic structure forms and is visible as the vortex radius. The velocity at the centre of the vortex is always precisely zero; there is no theoretical possibility here of an infinite singularity such as a big bang or black hole.
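This profile matches a standard idealisation from fluid dynamics, the Rankine vortex; a minimal sketch, with the circulation Γ and the vortex radius R as parameters of the idealisation rather than values taken from Meyl’s paper:

\[
v(r) =
\begin{cases}
\dfrac{\Gamma\, r}{2\pi R^{2}}, & r \le R \quad \text{(rigid rotation; } v(0)=0\text{)}\\[6pt]
\dfrac{\Gamma}{2\pi r}, & r > R \quad \text{(free vortex)}
\end{cases}
\]

The speed rises linearly from zero at the centre to a maximum at r = R and falls off outside it; the kinetic energy density, proportional to v², then falls approximately as the inverse square of the radius, which is the outer curve described above.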
If energy could be extracted from the centre of the vortex then that would merely allow for more energy to enter from the outside and presumably the converse would be the case; additional energy would dissipate outwards and again a norm is restored. Strength and stability at the centre are maintained by means of the accretion and dissipation of an effectively inexhaustible energy supply made available to the system by means of centripetal accumulation.
We have an example then of what might be termed ‘order from chaos’. A geometric structure with a self-regulating energy system has been created purely from the laws of physics with no need for any other informational input. The structure is stable to perturbations and yet at the same time mutable and adaptable to environmental forces. This is a contradiction of the general ideas of ‘entropy’ put forth by mainstream science.
A well-defined shape with a tendency to accumulate and stabilise energy into a functional gradient is used as the basis for larger self-organising forms, i.e. ‘Life’.
The basic vortex above is given by Meyl but more complex structures are known to cosmologists in the form of Birkeland Currents which show multiple concentric layers with alternating clockwise and anti-clockwise flows. [D. Scott]
Scalar waves
Electromagnetic fields can take various forms. Of relevance to biological systems are the magnetic scalar waves as described by Konstantin Meyl and below.
First a reminder of the structure of a ring vortex. In the diagram below an electric field in pink circumnavigates the axis whilst a magnetic field in yellow forms a magnet-like structure with a North-South dipole pointing up and down.
The magnetic field movement here is greater than the electric, and so this formation is favoured whenever the magnetic conductivity exceeds the electric, i.e. in electrical insulators.
In the top diagram below, several such structures have aligned along the magnetic dipole field, have self-organised into an even spacing and have merged somewhat to form a longitudinal wave: a scalar wave.
The lower diagram shows how this wave may propagate inside a co-axial cable, a wire with insulating sheath or a nerve with myelin sheath. The ring propagates in the less conductive sheath surrounding the central core.
Konstantin Meyl: Scalar waves: A first Tesla physics textbook
Both energy and information are transmitted by this means, energy by the ‘potential’ of a scalar wave and information by some unknown modulation of its structure. The ring itself represents a potential difference that can be used as energy at the destination.
The regular spacing of the vortices creates a de facto ‘frequency’, and the nodes of Ranvier separating the neural axons control the transmission of impulses to create an electromagnetic standing wave akin to a vibrating guitar string.
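To make the guitar-string analogy concrete, the textbook standing-wave relation applies; here the internode distance L and the propagation speed v are assumed quantities, not measured neural values:

\[
f_{n} = \frac{n\,v}{2L}, \qquad n = 1, 2, 3, \dots
\]

Only these discrete frequencies fit between two fixed nodes, so a regular node spacing automatically selects the ‘frequency’ of whatever wave is held between them.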
Transmission of information is now by modulation of a ‘static’ electromagnetic field structure. There is no need for a moving wave structure as with photons and no need for a stream of moving electrons as is assumed for electric currents. A carrier wave has been established but does not travel and transmission of information is not by frequency modulation.
A messaging system has been established where field movement is minimal, resistance is negligible and energy loss is almost zero; we have a kind of biological-informational super-conductivity.
Morphogenesis
The central problem of morphogenesis is how an organism attains its final form merely from the actions of molecules. This is a conundrum and remains so even if we add in all the remaining known laws of physics. Elements are attracted directly towards each other or repelled away from each other, energy is dissipated and entropy increases, but there is no sense of ‘form’, ‘construction’ or ‘stability’ apart from the basic arrangements of atoms and molecules.
The addition of the concept of a vortex makes a huge difference: we now have a basic shape in the form of a sphere or helix, an inward and regulated movement of matter and energy, and the existence of constructive forces at the molecular level.
A general vortex field will self-assemble into a cellular collective and communication between cells causes the emergence of a local bio-field that can be further organised to create a final form. See: Bio-field emergence
The heart: Helical streams of blood in the heart are instrumental in forming the shape of the heart itself. If the spiral flow is interrupted, the heart will not form. [Lucitti et al]
Cells: Each cell hosts an electromagnetic vortex with the nucleus at the centre. Energy is accumulated until there is sufficient for reproduction to take place. The field at the periphery of a cell, where it meets another vortex, has distinctive properties of its own (e.g. a large field gradient) which initiate the formation of some sort of membrane.
Red blood cells: RBCs are the embodiment of a torus of electrical vortex flow; the energy field likely preceded the physical shape and acted as a template for its formation. [Purcell et al]
Vortices are said to form ideally in the proportions of the golden mean, φ = (1+√5)/2 ≈ 1.618 (Meyl), and red blood cells show the same proportion in their healthy state. Deviations from this ideal lead to clumping, Rouleaux formation and impaired zeta potential. [Purcell (2)]
Nerves: See the zebra fish video above; the nerves develop from the ring vortices that they will eventually conduct.
Arteries: The blood circulates before the arteries emerge, arriving at some least energy route much the same as a river forms its own path to the sea. Thereafter, the flow of the blood forms an enclosing ring vortex and arterial tissue emerges to create the familiar tubular structure.
The brain: an obvious double-torus shape, and toroidal fields have been described within it.
Fingerprints: The whorls at the end of our fingers look like an emergent effect of some sort of vortex flow.
A physical being then is a refinement of a vortex collective, a teleological modulation of the emergent properties of a vortex field.
Sensory input
The sense of smell: Assumed to be the detection of chemicals in the air, but how does this work? How is molecular detection achieved and how is this converted to a nerve impulse to be transmitted to the brain? The sense of smell is by detection of field vortices (Meyl). Such vortices are produced by the scented material, fly through the air by field propagation and enter the nose. Nasal hairs act as antennae and convert the field disturbances to ring vortices which propagate along the hair to the olfactory nerve and proceed unmodulated to the brain for processing.
The sense of taste: This is similar to the sense of smell except that information enters the small hairs on the tongue (Meyl).
Vision: Photons enter the eye, morph to ring vortices and propagate along the rods and cones. They are filtered for frequency and collated at the optic nerve for further processing before moving along the nerve to the brain.
The binding problem
“The unity of consciousness and (cognitive) binding problem is the problem of how objects, background, and abstract or emotional features are combined into a single experience. The binding problem refers to the overall encoding of our brain circuits for the combination of decisions, actions, and perception.” – Wikipedia
Quite. How are experiences of fundamentally different categories merged together to make a single experience, and what is an ‘experience’?
Statements above suggest that the sense of smell is just the input of scalar waves or ring vortices direct to the brain whilst visual impulses are similar structures but modified by the optic nerve. Proprioceptive impulses travel along nerves in the form of scalar waves whilst the geometry and electrical properties of the brain further suggest operation via toroidal electric fields.
Meyl states simply that “the brain is a scalar wave computer” and a stable toroidal ring vortex is surely a good candidate for memory storage, so we have both memory and computation performed by the same structure.
The binding problem is now simplified greatly. We no longer have fundamentally different physical categories of perception to merge together as all perceptual and cognitive information is now in the same format, namely a toroidal electric field complex.
The question is now merely “How do we amalgamate a bunch of ring vortices?”.
One simple answer is to push them together. They at least now have the property that such a thing is possible. Again, watch ring vortices in water: you can see them divide into two, merge together, pass through each other or sit side by side whilst maintaining independence from each other.
If olfactory impulses can be somehow labelled as such whilst travelling from the nose and likewise for the other senses then we can imagine that all sensory information can be held on a single vortex structure and interpreted in the brain unambiguously at a later stage.
A single vortex structure holds a single holistic impression and persists as a single memory. The physical vortex can be shrunk to an arbitrarily small size for storage and amplified back up later on for recall.
Defective interpretation (or maybe defective labelling) results in synaesthesia.
The morphology of fruit
Why are fruits shaped the way they are? To a large extent an apple, say, is just a bag that expands by filling up with water, but that does not explain the presence or location of seeds or the wide variety of shape in other fruits.
The general principles of biological development seem to be:
The basis for the development of form is the vortex
Vortices self-organise to form cellular clusters
Emergent properties of such clusters are controlled via a supervening bio-field
Energy is conducted along suitable conduits via ring vortices
In the case of an apple, these principles are easily apparent. Each cell is likely an electromagnetic vortex and these self-assemble into an overall spherical vortex to form the general shape of the apple.
The stalk of the apple is likely wet and conductive on the inside and drier and less conductive on the outside. This is a similar arrangement to an insulated wire or a myelin sheath of a nerve and is ideal for the conduction of ring vortices.
Energy is absorbed in the leaf via the ring molecules of chlorophyll and transmitted along the conduits of the veins in the leaf in the form of ring vortices. Two such rings meeting at a confluence will easily merge to form a larger, more energetic ring which continues into the leaf stem and eventually to the woody material.
Some energy makes it to the trunk of the tree and is instrumental in raising the sap to heights hitherto unexplained by capillary action alone.
Some energy makes it through the stem of the fruit to enable the necessary production of sugars etc.
An overall vortex flow helps control the shape of the growing apple and some energy discontinuity tells it where to manufacture the tissue to form a skin. Other energy spirals inwards to concentrate at the centre of the apple where the flow breaks down into several smaller vortices to supply the energy and information required for the formation of the seeds.
A strawberry has a clear vortex structure at its centre. Energy is transmitted as a ring vortex along the stalk and then discharged from the cone-like vortex through visible filaments to supply individual seeds with energy.
Similar arguments apply to blackberries etc. where the fruit as a whole can be seen as an energy distribution system, concentrating energy via the vortex principle into the valuable seeds and thereby ensuring a new generation of plants.
As for oranges, compare Meyl’s drawing of the electron shell of Neon with the arrangement of segments in an orange. Electrons are the simplest form of field vortex and have arranged themselves in alternate polarity, with clockwise-spinning electrons nested between two with anti-clockwise spin.
An even number of electrons is mandatory for stability and with oranges we find that an even number of segments is preferred but not strictly necessary.
Konstantin Meyl
When things ‘go wrong’ with the formation of an orange, we do not see complete chaos but instead a cellular order is preserved. The basic laws of vortex physics are still in force and segmentation still occurs as a foundational phenomenon but has not been organised effectively by the supervening bio-field.
This is more evidence that morphogenesis is accomplished by a subtle ‘tweaking’ of the more basic properties of cellular structures i.e. those that arise out of simple emergence.
The emergent properties are robust and closely aligned to the Laws of Physics. However, they are organised by what might be termed subtle energies whose laws will likely remain a mystery for a long time, as the only effective way to decipher such forces is by observing their effect on the emergent properties of biological systems that they themselves were designed to organise. This is the only environment in which they may gain meaningful expression.
To study morphogenesis then, look for cellular organisation via vortices and study what happens when it goes wrong.
A general principle of biological organisation
The patterns mentioned above seem to be repeated again and again.
A supervening biofield acts, not directly upon the physical matter of the cells but instead on some other emergent field that arises from the self-organisation of the local cellular fields.
The cells themselves emerge from and are maintained by, the forces arising from electromagnetic vortices. It is these strong forces that interact with the biological matter to form physical bonds and tissues.
The fields organising such cells must themselves form an emergent biofield that presents a receptive interface or antenna to higher order fields thereby enabling a top-down organisation to take place.
Connection to the cosmos
We are regulated by electromagnetic vortex fields and we live between the twin capacitor plates of the Earth’s surface and the ionosphere. It is therefore pertinent to ask about the nature of the electric field between these plates. Conventional wisdom declares that a uniform field exists together with a slow steady discharge of electric current.
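For scale, the textbook atmospheric-electricity figures for this Earth-ionosphere system (standard values, not taken from Meyl) are roughly:

\[
E_{\text{ground}} \approx 100\ \text{V/m}, \qquad V_{\text{ionosphere}} \approx 250\ \text{kV}, \qquad J_{\text{fair weather}} \approx 2\ \text{pA/m}^{2}
\]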
The image below, however, suggests otherwise. A capacitor has been set up and left to discharge for 40 hours. A circular pattern results, suggesting that the discharge is of a helical nature and that a vortex field exists between the plates. Yializis et al
Meyl: About vortex physics and vortex losses
Scientists mapping data from radio telescopes are starting to find huge electric ring vortices in the atmosphere with ‘footprints’ at the Earth’s surface.
The vortices are part of the Earth’s magnetic field and as such can be expected to follow the same patterns of latitude and seasonality and to respond to solar magnetic disturbances in some way as energy from the sun impacts our magnetosphere and is absorbed, modulated or even amplified by these structures.
Many scientists have found links between cosmic events and biological metrics but have been puzzled as to the mechanism, thinking that the orbits of the moon, Saturn or even Mercury are somehow affecting life on Earth by exerting a gravitational influence on our constituent atoms.
More likely it is electromagnetic field disturbances which propagate through space, are received by field vortices in our atmosphere acting as antennae and make their way into our regulatory systems.
Frank Brown found that all forms of life would apparently synchronise their activity to rhythmic events in the cosmos but could not work out the mechanism. Various inbuilt phase responders are somehow sensitised to the orbital movements of the planets, will ‘resonate’ in step and will then trigger innate behavioural patterns such as feeding or mating.
Electromagnetic fields were suspected, but Brown’s work seems to be largely neglected by the scientific community, presumably because the lack of a credible mechanism causes them to distrust the actual results. However, the assumption of a structured vortex field regulating the body together with recent discoveries concerning the Earth’s magnetic field now make such phenomena seem completely natural, with only the details to be worked out.
Similarly, Simon Shnoll, Giorgio Piccardi and others found that quantifiable processes in biology, chemistry and physics varied with planetary alignments and phases of the moon.
Such connections to the cosmos are not always beneficial, however.
Implications for health and disease
Many diseases, even heart attacks, show seasonal variations: Seasonal disease. The epidemiology of influenza in particular has been well studied and found to demonstrate strong patterns associated with season, latitude and sudden changes in temperature, humidity and pressure.
This is a strong indication that the Earth’s magnetic field is somehow responsible for influencing the bio-field of the body and thereby contributing to the altered regulatory state that is described as ‘influenza’. See: Influenza and weather
When viewed from the perspective of electric fields, there is no clear separation between the bio-field of a human and that of the surrounding cosmos. Energy and ‘information’ travel seamlessly from the solar surface to the Earth’s magnetosphere and thence to individual organisms via a variety of energetic filaments and vortices.
The activity of such vortices shows stable seasonal and latitudinal patterns that are modulated by local weather events and as a consequence, disease appears in the population at a time and place that is somewhat predictable from meteorological data.
Researchers from NASA found that the appearance of influenza in each state coincided with precise changes in humidity (Shaman et al), whilst researchers in India noted a coincidence with the onset of the rainy season (Parvaiz et al) and those in Myanmar found similar associations between dengue and the onset of the monsoon (Zaw et al).
The influences seem to have little in common but all are expected as a pressure front approaches. Such phenomena are associated with changes in pressure, wind direction, helical updrafts of air and presumably the formation of electromagnetic field currents.
Electromagnetic vortices were set up in some metallic micro-discs and exposed to electromagnetic vibrations. State changes were observed, i.e. measurable changes in an electric field were induced by the application of another electromagnetic field.
So we now have a potentially useful way of measuring certain aspects of electric fields that may not be available to a traditional antenna. Set up an array of these vortices and see if we can measure fine modulations of the atmospheric discharge.
The array is calibrated to be hypersensitive to certain target frequencies but robust to the measurement frequencies. Vortices are set up close to some critical state and micro-changes in the ambient field will cause a sudden phase shift thereby amplifying the signal. Field modulations of arbitrary sensitivity may be set up depending upon the technology used.
Now if such mechanisms are in place in living systems, we have a biological antenna connecting the bio-field with the cosmos, with the capacity to detect arbitrarily weak signals and to amplify them into something meaningful.
Response strength of individual vortices is decoupled from input intensity to some degree by the critical phase shift, but a continuum of response may be available as an emergent statistical property. There is no need here for magnetite particles or similar to effect signal reception as the vortex field itself is the antenna.
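A toy numerical sketch of this statistical amplification is given below. Everything in it is hypothetical: the ‘vortices’ are reduced to bistable threshold elements, and the array size, noise level and signal strengths are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 10_000          # number of near-critical vortex-like elements (hypothetical)
    threshold = 1.0     # bias level at which an element flips state
    noise_sd = 0.02     # spread of individual biases just below criticality

    # Park each element a small random distance below its critical point.
    bias = threshold - np.abs(rng.normal(0.0, noise_sd, N))

    def response(signal):
        # Fraction of elements pushed past threshold by a weak perturbation.
        # Each flip is all-or-nothing, but the population average is graded.
        return float(np.mean(bias + signal >= threshold))

    for s in (0.0, 0.005, 0.01, 0.02, 0.05):
        print(f"signal = {s:.3f} -> fraction flipped = {response(s):.3f}")

Each element gives only a binary phase flip, yet the fraction of the array that flips rises smoothly with signal strength: the continuum of response emerges as a statistical property of the population, as suggested above.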
The existence of such vortex fields may well be reflected in the physical matter, meaning there may be physical organelles set up which act as receivers, but we will have to know what to look for and how to measure.
The vortex is the transducer and is powered by an inward spiralling of the Gibbs field. Reception is via ‘vortex resonance’ which allows the filtering of selected frequencies.
The idea of magnetoreception by magnetic particles is problematic. A certain strength of signal will be required to move a molecule to a sufficient degree thereby imposing immediate limitations on what can be detected, and what happens then? A particle moves and induces a small (attenuated) change in the surrounding field (even mechanical waves here are really electro-acoustic) and then what? We are back to trying to detect the resulting field changes and now need some sort of antenna to measure them. We are back to square one!
Best to go straight for bio-field modulation and then try to work out the fine structure of such a field.
Inheritance
Certainly some information is passed from father to child and so there is a requirement for a transport format for such information.
An electromagnetic ring vortex would seem to fit the bill. The basic structure is highly stable, energetically persistent and scalable. There is a simple method available to merge information from each parent, which is to merge the respective vortices. See: Evolution and Inheritance
The phenomenon of Telegony shows that information can be passed without DNA as a vehicle.
The exact encoding scheme of such information is not known, but if we reject DNA as a format then we are no longer limited to a few gigabytes of data. There is no minimal quantum of information in electric fields, and so a ring vortex can theoretically carry an arbitrarily large amount of analog information.
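For reference, the ‘few gigabytes’ figure is simple arithmetic on the published human genome length of roughly 3.2 billion base pairs. Stored naively at one byte per base this is about 3.2 GB, while the information-theoretic minimum of two bits per base gives:

\[
3.2\times10^{9}\ \text{bp} \times 2\ \text{bits/bp} = 6.4\times10^{9}\ \text{bits} \approx 0.8\ \text{GB}
\]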
Summary
A hypothesis has been presented that an energetic bio-field is responsible for the organisation and regulation of many, if not most, biological processes and that this bio-field is in the form of electromagnetic vortices.
The theoretical existence of such vortices is here merely assumed but adequate support can be found in the works of Professor Konstantin Meyl. Some evidence is presented here for the presence of such vortices in the Earth’s atmosphere and in laboratory experiments. (also Peng)
Vortex fields are not ‘directly’ measurable within biological systems using current scientific instruments beyond a crude representation as an electric current. However, existence of such a field is consistent with multiple observable phenomena which are currently unexplained by modern science and whose presence in many cases seems unlikely to be understood in terms of the interactions of molecules alone:
The general organisation of biological systems
Existence of suitable conduits for ring vortices
A requirement for centripetal movement of energy within living systems
Vortices in arterial blood flow
The emergence and self-organisation of cellular masses
A measurable bio-field external to the human body
The efficiency of energy transfer within biological systems
A video of the development of a nervous system from scratch
Obvious vortex patterns reflected in morphology
The hypothesised transmutation of elements from one to another
The recognition that some organisational principle must exist independently of the material it organises and prior to the act of that organisation. This is true for general maintenance, embryonic development and the actual origins of Life.
These phenomena seem adequately explained merely by the recognition of the vortex principle in electromagnetic fields. Aside from this there is no need for additional exotica such as quantum coherent domains, cold vortices, extra dimensions, quantum entanglement, randomly vibrating molecules, multiple universes or separate realms consisting entirely of ‘consciousness’.
There is no need for abstract definitions of disorder as ‘entropy’ or of order as ‘negentropy’ and no need for a formulation of information as separate from the rest of physical space. Indeed, Konstantin Meyl has stated: “Information is the structure of a scalar wave“.
We can look forward to a return to just Plain Old Physics as a way of understanding the physical universe.
References:
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
Local modulation of neurofilament phosphorylation, axonal caliber, and slow axonal transport by myelinating Schwann cells – de Waegh, Brady https://pubmed.ncbi.nlm.nih.gov/1371237/
Intracardiac fluid forces are an essential epigenetic factor for embryonic cardiogenesis – Hove, Köster, Forouhar, Acevedo-Bolton, Fraser, Gharib https://pubmed.ncbi.nlm.nih.gov/12520305/
It appears that there is no diffusion of gases through the lining of the lungs but that energy in the form of electromagnetic vortices is transferred from the oxygen gas in the air, directly into ADP molecules in the bloodstream.
The ADP molecules flow to the cells and this energy is used to facilitate cellular processes.
There is no transfer of oxygen gas from the air to the bloodstream.
Mainstream view
The accepted narrative is that oxygen gas is inhaled into the lungs whereupon some of it diffuses or otherwise passes through the lining of the lungs, through the capillary walls and into the bloodstream. This oxygen reacts with carbon to release the energy used by cellular processes and carbon dioxide is produced as a waste product. This CO2 then passes back through the lungs past the incoming oxygen and is expelled as we exhale.
Problems with this idea include:
No credible mechanism is described by means of which oxygen passes one way through the membranes. CO2 moves in the opposite direction and nitrogen is prevented from moving either way; but how?
Fish manage to breathe somehow despite having no access to gaseous oxygen. The assumption that gaseous oxygen and the dissolved version are pretty much identical is simply not justified.
Gaseous oxygen molecules are actually quite huge (see below), and if we are to believe that they do indeed pass through a biological membrane then we will need some actual evidence for that.
Techniques for measuring the proportion of oxygen and CO2 in exhaled air do not take into account the possibly altered state of the oxygen itself and in addition adhere to an outdated theory of gases.
A better explanation is available.
Meyl’s hypothesis
Professor Konstantin Meyl describes a gas as consisting of molecules where the electrons have come out of their n=1 orbital and formed a ring around the outside of the rest of the atom. The reduced field strength here has enabled them to expand to some 30,000 (!) times their original dimensions.
The diagram depicts a gaseous oxygen molecule comprising an O2 ‘nucleus’ surrounded by 8 electrons in a ring.
Each electron has its own electric field spin and this results in a magnetic dipole for each particle. The electrons stick together via the magnetic field and are kept apart by the electric field.
The electrons have their own local spin and the ring will in addition rotate as a whole. All this spinning constitutes ‘energy’, and the system is able to accumulate energy from the outside, store it and release it later as conditions permit.
This expanded molecule has its own magnetic dipole and will thus adopt a specific orientation with respect to other gas molecules in accordance with the laws of electromagnetism; gas has a structure.
This model provides a nice explanation for Avogadro’s Law, and Meyl actually derives Avogadro’s constant from theory in the video; it had hitherto been thought of as a fundamental constant of the universe to be approximated only experimentally.
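For reference, Avogadro’s Law in its standard empirical form states that equal volumes of any gas at the same temperature and pressure contain equal numbers of molecules:

\[
\frac{V}{n} = \text{const.} \quad \text{at fixed } T, p, \qquad N_{A} \approx 6.022\times10^{23}\ \text{mol}^{-1}
\]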
Respiration
The vortex energy (field rotation) from the spinning electron ring is transferred to the rotational energy of a phosphate group of an ADP molecule in the blood. This ADP is carried away to the cells where it can release the energy to do something useful.
The oxygen in the lungs is now energetically depleted somewhat and is exhaled.
Constant breathing of the air will cause it to lose more energy still, making it ‘stale’ and unhealthy (What causes pneumonia?). Repletion from the atmosphere is needed.
“Proof is provided, for example, by top athletes who give off significantly more energy than they absorb through food. Similarly, migratory birds on a non-stop flight violate the law of conservation of energy. What they take in, in addition to food, obviously comes from the air.” – Meyl (Die-Covid-Falle)
Microwaves and 5G
Exposure to microwaves at around 60 GHz is conjectured to interfere with the transfer of energy, raising the possibility of whole flocks of birds falling from the sky, and of the spontaneous collapse of Chinese citizens coinciding with the rollout of 5G in Wuhan.
“First of all, the advantages of the respiratory system for the rural dweller should be emphasized. It is insensitive to static and low-frequency interference. Even high frequencies up to 1 GHz have only a minor influence. However, high and maximum frequencies above 2.4 GHz are used in mobile communications.
“Extreme frequencies, such as microwaves and above, can disrupt or hinder the rotation of gas molecules. There are speculations about 60 GHz, at which spontaneous death can occur under certain circumstances. If at a certain maximum frequency the gas ring can no longer be absorbed by the phosphorus tail of the ADP, then we immediately no longer get any energy.
“In this way, in tests, entire flocks of birds have been taken out of the sky during flight. The deadly frequency was switched off again as quickly as possible. There was silence about this and the crime against nature was covered up. The telecom industry has left it to individual brave citizens to report on the killing of the animals in alternative media, in order then to denigrate them as crackpots and conspiracy theorists.” – Die-Covid-Falle
ADP/ATP cycle
Mainstream opinion is that ADP is converted to the higher energy molecule by the addition of an extra phosphate group and that the loss of this and consequent conversion back to ADP is a source of energy for the mitochondria.
Meyl, however, is claiming that ADP and ATP act independently as vectors for vortex energy, with ATP managing to acquire extra rotational energy owing to the additional phosphate group. The mainstream has the energy stored in ‘bonds’ whereas Meyl has it in ‘rotational energy’.
There appears to be no need for ADP and ATP to be continually transforming from one to the other.
“In my opinion, I would like to conclude by saying: After the rotation and transport have been transferred, the mitochondria undergo refining and the rotation of the ADP is taken over by the ATP. The ATP molecule has a tail that is longer by one phosphorus.
“Now the transport continues to the muscle cells, the heart muscle and the thinking apparatus.
“The rotation is used as needed. That is why ATP and ADP with a lot and a little rotational energy can be detected in the blood everywhere.” – Die-Covid-Falle
This all sounds entirely reasonable and in tune with the laws of physics, so why have we believed for so long in the oxygen/CO2 cycle?
Exhaled air
Conventional wisdom says that exhaled air contains less oxygen than inhaled air, and in the same percentage as the increase in carbon dioxide. All sources seem to quote the same figures, although finding a decent experiment that proves these has proved problematic.
The coincidence of proportion is not, by itself, actual proof of transfer across a membrane, and we can certainly question the accuracy of these results.
Measuring CO2
The proportion of carbon dioxide in the air is commonly measured by the amount of infra-red absorption. This is no doubt fine if the only thing that has changed is the amount of CO2, but here we are measuring air that has been exhaled.
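Such instruments (non-dispersive infra-red, or NDIR, sensors) infer the concentration from the Beer-Lambert law, a standard relation in which the absorbance A depends on the absorption coefficient ε at the measurement wavelength, the concentration c and the optical path length ℓ:

\[
A = \ln\frac{I_{0}}{I} = \varepsilon\, c\, \ell \quad\Longrightarrow\quad c = \frac{A}{\varepsilon\,\ell}
\]

The inferred concentration is only as good as the assumption that ε is fixed; the concern raised below is precisely that exhaled air might violate that assumption.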
Such air may well contain less oxygen (though not according to the above), may well contain more moisture and in any case contains oxygen that has been depleted of energy.
It doesn’t seem unreasonable that depleted air may absorb more infra-red radiation simply because of that fact: that it is low on energy and in a more ‘receptive’ state.
Measuring oxygen levels in exhaled air is, if anything, more complicated than measuring CO2 as multiple factors such as temperature, humidity and pressure will affect the result.
The oxygen content is not measured directly but is calculated according to some formula that assumes the Theory of Ideal Gases, uses some empirically derived ‘constants’ and is relative to a ‘calibration’ value.
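The ‘Theory of Ideal Gases’ referred to is the familiar equation of state, from which the temperature and pressure corrections in such formulae are derived:

\[
pV = nRT
\]

A raw sensor reading must be rescaled by the measured temperature and pressure before a mole fraction of oxygen can be quoted, which is where the empirical ‘constants’ and ‘calibration’ values enter.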
All fine, except we are now saying that the oxygen itself can be in a markedly different state in exhaled air and, moreover, that the fundamental conception of gases is itself in question. We are therefore justified in adopting a highly sceptical attitude towards existing techniques.
Fish
Fish are clearly getting their energy direct from the water somehow whether it comes from the dissolved oxygen or not.
The architecture of the gills is markedly different from mammalian lungs and this reflects the difference in viscosity between water and air. Water will not circulate properly in lungs and will not empty properly upon attempted exhalation.
The continuous flow of water through the gill structure is an obvious reflection of this.
Summary
We now have:
A credible mechanism by which energy passes from the atmosphere to the bloodstream
For the first time in history, a theory of gases that adequately describes the familiar and observable phenomenon of ‘pressure’
A theory of gases that is consistent with Avogadro’s and other empirical laws
A derivation of Avogadro’s number, a supposedly ‘fundamental’ constant
A mechanism by which 5G and other microwave technologies can directly affect physiological processes
An alternative and believable explanation for the ADP/ATP energy transfer
References:
Potential vortex, newly discovered properties of the electric field are fundamentally changing our view of the physical world – Konstantin Meyl https://www.meyl.eu/go/indexb830.html
There is no such thing as static electricity as commonly imagined and even descriptions from mainstream science are self-contradictory. All electromagnetic fields are composed of ‘living’ filaments of spiral field vortices which propagate at the speed of light and contain their own ‘energy’.
‘Movement’, i.e. field movement, is intrinsic to electromagnetic fields; the vortices want to go somewhere, meaning nothing is ever truly static and the field itself can act as an energy source.
Field propagation is at the speed of light as with photons, but the propagation speed of a field vortex will depend upon the pitch of the vortex or the exact characteristics of the ring structure.
A conventional static field is a conglomeration of moving vortices. However, this fine-grained structure has been missed owing to the crude nature of the measuring instruments and the unquestioning acceptance of an over-simplified and inconsistent theory.
Classical theory
The classical model of an electrostatic field is based upon the idea of a ‘charge’ (an electron) and an associated ‘force field’ which adopts a radial configuration and obeys an inverse square law out to an infinite distance: Coulomb’s law.
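For reference, the law in its standard form, with ε₀ the permittivity of free space:

\[
F = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1}q_{2}}{r^{2}}
\]

Special relativity limits any change in the field to a propagation speed of c, so a rearrangement at the charge can be felt at radius r only after a delay of at least r/c; this is the tension exploited in what follows.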
For most practical purposes this seems to work but consider what happens if a shield of lead (Pb) is applied to eliminate the field and then removed; the field disappears immediately and is then instantly renewed. Coulomb’s law should still hold but this means that the field should come into existence again all the way to infinity in no time at all!
Physicists know this and know that it is impossible but seem to think that when the shield is removed, what happens is that the field somehow repairs itself from the charge outwards, radiating to infinity at the speed of light, whereupon it knows to stop and stabilise in order to re-establish Coulomb’s Law.
So the field has ‘moved’ outwards (i.e. it is not static) and it has originated from a small charge which never seems to run out of ‘field substance’, never runs out of energy to renew an infinite field in an instant and maintain it indefinitely.
Similar concerns apply to what happens if an electron is moved. In this case, in order for Coulomb’s law to hold, the entire field all the way out to infinity must also move with it.
This is inconceivable to sane people and Newton had similar concerns about the nature of gravity. Any instantaneous action at a distance is in any case a contradiction of the principles of special relativity and so classical physics and relativity are at odds with each other. They cannot both be true at once and the absurdity of the standard description means that classical electrostatic theory at least is flawed and even inconsistent with Newton:
“This form of solutions need not obey Newton’s third law as is the case in the framework of special relativity (yet without violating relativistic-energy momentum conservation)” – Wikipedia
No charge!
Classical theory relies heavily upon the idea of ‘charge’ as being the source of electric fields, but charge as such does not exist and attachment to this concept has proved to have a stifling effect upon improving electrostatic theory.
How is charge measured? How do we know it exists? It has not been described directly but we ‘know’ it exists because we can measure the forces exerted by it and then use Coulomb’s law to calculate the amount of charge that must have created such force.
This is very obviously a circular argument: “Charge creates force so any observation of a force is proof of the existence of charge”. Clear bunk.
Vortex theory: the electron
According to the vortex theory of Konstantin Meyl, an electron is merely the ‘vortex radius’ of a spherical-toroidal shaped electromagnetic field. The vortex was created from an extended field of an arbitrarily large size which continues to morph, mutate and expand throughout the cosmos.
The field has energy of its own and is self-maintaining by itself but in practice will interact with the local field structure, whether this be within an atom or in the ‘void’ of space. Measurements of the field around a particle will imply a spherical structure and lead physicists to infer the existence of ‘charge’ because that is what their theory says.
Within this framework, the whole of ‘matter’ is described as field structures and the only ‘forces’ available are electromagnetic forces. Therefore, the only way to move an electron is by the application of a motivational field. Such a field will interact with the field surrounding the electron, and the effects will spiral inwards towards the vortex centre of the little ‘particle’, thereby causing movement of the vortex.
So here it is the deformation of the field that leads to the movement of the ‘charge’ and not the other way around.
In vortex physics, the field is the primal cause and the illusion of matter is a downstream effect. Classical physics has all this inverted, with ‘matter’ or the ‘properties of matter’ (charge) as the origin of force. This just leads to confusion.
Charged objects
A single electron takes the form of a single spherical vortex structure but a charged object such as a balloon or a charged metal sphere is a different matter.
The top of a Van de Graaff generator is a conductive ‘sphere’ filled with electrical eddy currents. These are field vortices that are not stabilised into electrons or positrons and are free to mutate into different configurations as conditions allow.
Vortices move and propagate, they move to the surface of the metal via mutual repulsion and form a ‘layer’ owing to the difference in conductivity between the metal and the surrounding air. The fields act as accumulators and gather sufficient energy to propagate into the atmosphere, possibly taking on a slightly different configuration appropriate to the ambient conditions.
A radial field of electrical filaments emanates from the sphere and propagates outwards to infinity. Measuring devices will take an average over a relatively large area of this field and conclude a ‘potential’ that diminishes according to an inverse square law.
The sphere is distributing energy and so the field is diminishing accordingly. This is interpreted in the mainstream as ‘charge loss’ i.e. the loss of actual matter (electrons or ions) from the object! A pattern to look for in physics is the offhand dismissal of ‘losses’ and ‘noise’ as if these things need no explanation, as if the laws of physics do not apply here. By ignoring inconveniences, the impression is created of a consistent theoretical framework when nothing of the sort exists.
Coulomb’s law (vortex interpretation)
So measurement of field strength (electrical potential) is really an average of the effects of field vortices and this will approximate an inverse square law according to geometric considerations alone; the filaments spread out over a greater volume of space and this is sufficient to produce the law.
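The geometric argument can be made explicit with a standard flux calculation: if a fixed number N of filaments (or a fixed emitted power) crosses every concentric sphere, the density at radius r falls with the sphere’s surface area:

\[
\rho(r) = \frac{N}{4\pi r^{2}}
\]

No ‘charge’ is needed for this step; the inverse square follows from the spreading alone.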
Now consider shielding with a lead cage and then removing it suddenly. The eddy currents propagate outwards at close to the speed of light depending upon helical pitch and the field is renewed in due course.
Theoretically the propagation is out to infinity, but it doesn’t ever stop as infinity is never reached and in any case the field in this case is emphatically not static but in a state of continuous radiation with continual concomitant ‘losses’ from the charged sphere.
If the sphere is moved suddenly, then field effects will propagate outwards, similar to the way that waves and eddies spread from a stone tossed into a whirlpool. Field propagation itself is at the speed of light, but emergent effects will move at different speeds according to their dynamic geometry. The field ‘travels’; it is never static.
Friction
If two substances are rubbed against each other, the atoms do not actually ‘touch’ each other as such an idea does not make sense in either classical or vortex physics. Do atoms ‘touch’?
Instead what happens is that the vortex radii of many electrons will come very close together, creating very strong field interference. The kinetic energy of rubbing is converted to vortex energy in the atomic structure and the associated ‘field drag’ is experienced as a resistance to movement, i.e. ‘friction’.
We now have an excess of vortex energy over baseline and eventual destabilisation will lead to several phenomena:
Transfer of vortex energy throughout the medium is known as heat diffusion
The reconfiguring of a vortex into a plain old photon in the infrared range
The dissipation of electric eddy currents away from the material
The third of these is what is called the triboelectric effect. A ‘static’ electric field has been created without the need to rip an atom apart by stripping electrons from the outer shell.
The Van de Graaff generator
We are now ready to tackle this complex subject. This is simply not understood by the mainstream even according to their own account.
In their version, positive ‘charge’ is created either by extracting protons from the centre of atoms or by stripping off electrons to leave a positive ion. Both of these are ‘matter’: they move obligingly around the circuit, are recreated at (2) by field induction and gather on the outside of the metal shell, where they create the infinite field in an instant and then leak away to the surrounding air. The metal dome is continually losing material substance but never seems to shrink, run out of ions or develop any sort of film at the surface. Very suspicious.
Eventual spark discharge is via ionisation of the surrounding air in accordance with an enormous ‘voltage’; yet another breaking down of atomic structure.
The explanation from vortex physics is still not simple but really only involves a single process, which is to say the transformation of field vortices from one semi-stable structure to another depending upon the local conditions.
The rubber band is an insulator and so favours, (via the triboelectric effect) a vortex of magnetic potential over a vortex of electrical movement. Movement is intrinsic to field vortices which aids in self-organisation of self-similar structures.
The field structures from the band will form particularly around the top of the wheel where there is elastic deformation of the rubber. They will transmute into positive electrical vortices at the surface and propagate through the air to the graphite brushes.
More propagation is guided by the strong conductivity of the metal dome and eventually field propagation occurs from the dome to the outside atmosphere as described above. No flying matter is needed and no ‘field induction’.
Spark discharge may well be accompanied by ionisation, but this may not necessarily be the cause. If the field is uniform then why is the discharge so localised, and why does it often take on a vortex shape? Discharge is via a field vortex, and the centre of the vortex increases field intensity, which leads to ionisation.
So it is the field vortex that precedes and therefore causes the ionisation and not the ionisation that somehow forms vortices as it breaks down the air molecules.
The huge voltages claimed therefore may not be real but may be local effects combined with measurement artefacts. In any case, a ‘voltage’ is the result of averaging over millions of smaller field phenomena. It may even be that vortex filaments are attracted towards the measurement instrument!
The Earth’s electric field
The Earth’s electric field is likewise not static nor uniform. Discharge from the ionosphere is in the form of field vortices and it is these that can affect the electromagnetic bio-field of organic life forms, having some beneficial effects in maintaining bio-rhythms and some detrimental effects in promoting disease. See: Influenza and weather
Wikipedia
Coulomb’s law is described by Wikipedia both as ’empirical’ and as ‘fundamental’ at the same time which does rather highlight the confusion over the whole idea.
Concluding remarks
The idea of a static field and the requirement that it must come from ‘charge’, that it is inextricably dependent upon ‘matter’ may be good enough for many practical purposes, but is not theoretically tenable and therefore unsuitable as a foundational concept in physics.
It may be framed as merely ’empirical’ but is invariably regarded as ‘fundamental’ and immutable in the absence of anything better. This attitude has proved quite crippling in terms of making any sort of advancement in a wide area of theoretical physics and has resulted in the workings of biological systems seeming utterly incomprehensible.
This has led many serious researchers to conclude the existence of some sort of vitalistic force in living systems. They are not wrong. The élan vital is nothing more or less than the organised movement of field vortices as they impact upon biological tissue. The tissue guides the field movement and the field energy ‘enlivens’ the tissue.
The idea of a field that is static, uniform and dependent upon charge should be cast aside in favour of a field that is moving, vortex-like and independent of a material source.
The vortex theories of Konstantin Meyl are not just a speculative adjunct to contemporary science but a necessary replacement for many areas.
Legacy biology claims that aggressively reproducing bacteria are responsible for cell death in the lung tissue. The body tries to frantically repair the damage whilst the immune system is responsible for killing off the bacteria at the same time.
The New Biology paradigm is happier with the idea that it is the tissue that dies first and that the bacteria are not causal in the process but are merely opportunist scavengers that live off dead tissue.
But what causes the tissue necrosis in the first place, and why is the lung tissue seemingly more susceptible to this type of disorder than other parts of the body? Why is pneumonia common in hospitals with supposedly strict hygiene protocols, and why does it seem to be a progression of other respiratory conditions such as influenza? Why don’t the nurses ‘catch’ it?
First consider that the lung tissue needs a continuous supply of energy in order to maintain it. This is assumed to come from oxygen in the blood delivered via the capillary system. The job of the lungs though is to absorb oxygen from the lung cavities and deliver it to the rest of the body and this is achieved via a separate capillary system, the pulmonary capillaries.
The coexistence of two such systems is a complexity not seen in the rest of the body and I will guess that this restricts the number of maintenance capillaries somewhat thereby making the whole system a little delicate and meaning that any extra input of energy in this area would be most welcome.
Konstantin Meyl has stated that such an additional input exists in the form of electromagnetic field vortices which are transferred from fresh air through the lung tissue directly to the bloodstream. Air that has been breathed and had insufficient time to recover is depleted of vortices and depleted of energy.
Gerald Pollack has written a paper going a step further, claiming that there is no exchange of oxygen at all in the lungs and that all energy input is via electrical energy.
Hypothesis: This energy is not merely necessary as an input to the bloodstream, but is vital for the maintenance of local lung tissue. These vortices will be absorbed directly into the lining of the lungs and assist in maintaining healthy cells. Exercise will increase breathing and proportionately increase energy intake. The inhalation of stale air will reduce energy intake.
We can see now the possibility of necrosis prior to bacterial proliferation.
An already weakened patient is confined to bed and immediately suffers a decrease in energy input to the lung tissue and in due course the intake of stale air further reduces available vortex energy.
Nurses and carers do not succumb as they are walking around, breathing more air and not spending 24 hours a day inhaling ‘dead’ gases.
The disease seems to be a progression of a viral infection but it is a consequence of bad treatment instead.
Treatment
If the cause is a lack of energy in the air then we should expect that the treatment should consist of… exposure to fresh air!
“Our systematic practice was to put all pneumonia patients during the day, for six hours, on the roof, in the open air, in all weather in which harsh high winds, rain and snow did not prohibit. Indeed, the patients were not always brought in for little sprinkling rains or trivial snowfalls, and many times were out when high snow banks formed a corral about the space in which the beds were grouped” – Northrup (1906)
“Gradually, after most careful precautions and constant watching, it became the firm conviction of all observers that such patients were decidedly benefitted thereby.”
A couple of videos from Rupert Sheldrake concerning the abilities of homing pigeons provide convincing evidence of our ignorance of this phenomenon. It isn’t just that nobody has any idea how it happens but that there doesn’t seem to be any chance at all that we could describe it in terms of any currently known scientific theories.
Points of interest:
Released pigeons typically fly straight home
A pigeon separated from the flock can still get home eventually
They can be blindfolded and put on a rotating table and still get home
They can navigate on a completely overcast day
Connection to the environs of the loft as opposed to the construct itself
A flock can, however, find its way to a moving loft on a ship at sea
Trans-generational communication of migratory patterns
All ‘reasonable’ mechanisms have been ruled out
There appears to be no explanation for these phenomena in terms of conventional science so we need to look further afield:
Theoretical constructs from vortex physics (Konstantin Meyl)
Evidence showing effects derived from the theory
Similar ‘patterns’ of geomagnetic awareness from Frank Brown
In videos such as the one below, Konstantin Meyl has demonstrated the transmission of power via Scalar Waves, also known as Tesla Waves, and theorised that this set-up can also be used to transmit information.
The waves are electromagnetic in nature
Connection is one-to-one between the metal spheres
Once a connection is made there is no power loss
There is no inverse square attenuation of the signal
Such signals are unaffected by ‘matter’ and can tunnel through the Earth
The connection itself can absorb energy from solar neutrinos leading to more power arriving than was originally sent
So an obvious hypothesis then is that the pigeons are somehow communicating with each other at least via this system. A connection, once established, is robust and distance is not an issue as the field itself is self-maintaining via the absorption of external energy.
Pigeons will be able to communicate over the horizon easily enough. The signal does not bend around the Earth however but simply tunnels through it; any pigeons left at home will act as a beacon for the displaced flock.
The brain
Meyl has stated simply that “The brain is a scalar wave computer” and that the nerves are scalar wave conductors. The waves are magnetic in nature and travel in the insulating myelin sheath around the nerve, with an electrical component travelling down the conductive body of the nerve.
This electrical component is a pale reflection of the true nature of the signal, but it is this ‘current’ that modern science has assumed to be the only thing of relevance to the functionality of the nerve. The structure of the magnetic part is the actual carrier of the information.
No transduction of energy or information is therefore required for this kind of telepathy as the electromagnetic activity of the brain is transmitted unmodified through the air using the same medium as the brain itself.
The phantom leaf effect
A leaf placed between two layers of plastic will leave behind some sort of ‘imprint’ that can later be photographed under a strong magnetic field.
What has happened is that vortex energy from the living leaf has moved to the plastic sheet, which, being an insulator, will favour the stabilisation of such energy into magnetic scalar waves. The electrical component has been minimised owing to the poor conductivity and a magnetic vortex system remains.
My suggestion is then, that something similar happens with pigeons, that a whole flock will leave some sort of trace upon their environment and it is with this imprint that a connection is maintained thereby enabling an accurate homing navigation.
Connection to ‘place’
Pigeons whose loft was moved whilst they were away, first returned to the original location of the loft and not the loft itself, which suggests that the connection was maintained, not with the dead material substance of the loft but with the living ‘field’ of the forest surroundings.
Other snippets, however, have lofts on the roofs of high rise flats or on a ship at sea. Different materials have different conductive properties and different structures of scalar waves may form. Since the connection itself is absorbing neutrino power, it is conceivable that the integrity of the transmission be maintained in such a fashion.
The nature of the connection
The connection is that of one electromagnetic field to another. The brain works via a set of nested toroidal vortex fields and directly absorbs similar energies from the environment.
In one video it is suggested that magnetic particles (i.e. ‘matter’) are required in order to detect the Earth’s field but this is not necessary; magnetic vortices will enter the field of the brain and have a direct effect on its operations. If there are any magnetic particles that are coerced into movement by magnetic forces then the only way that the body can detect such movement is via its effect on an electromagnetic field anyhow – so why did we need particles in the first place?
Field information is absorbed directly into the brain with little need for translation or interpretation.
So the whole of the brain field itself is the antenna for the reception of electromagnetic field activity and no specific organ is needed for this function. How would such an organ work anyhow? It would still need some means of collecting information and this will be an electromagnetic field complex.
There is no need to interpolate ‘matter’ in the middle of electromagnetic field interactions and in any case it is too crude a substance to play any part in conscious activity.
An extended consciousness?
The energy field of the physical brain is said to be measurable several feet away from the head, and since this field is now almost synonymous with the ‘etheric’ brain itself, it maybe isn’t too fanciful to ask if this extension of the energy field might have some practical purpose.
The physics espoused by Konstantin Meyl allows for far more complex behaviour in electromagnetic fields than that of classical science. ‘Movement’ is intrinsic and the field structure has a tendency to form spiral structures. Energy and information are guided towards a vortex centre and the second law of thermodynamics is inverted. A concentration of energy takes place alongside the more familiar dissipative structures and all of this is highly propitious for the formation and maintenance of living systems.
Consider then that information external to a pigeon’s physical brain is caught in its brain vortex and will then spiral inwards towards the physical bird. We then have an antenna that is considerably larger than a tiny bird brain and the whole concept starts to sound more likely.
A tadpole had its eyes taken out and grafted onto its hindquarters (mentioned in a paper by Michael Levin) and after recovering from the shock could navigate its surroundings quite happily. So it doesn’t seem to matter how the information gets into the body; it will be processed correctly nevertheless.
The bio-field of the heart is much larger than that of the brain so we can maybe think of this also as a receiver of scalar waves. Energy can radiate outwards at the same time as information spirals inwards; the whole of a pigeon can be considered as a scalar wave antenna.
Watch a single-celled organism find its way around a microscope slide in order to chase down food. It has no sensory organs, no brain and not even a nervous system, but it is still aware of what is going on and manages to coordinate its movements accordingly.
A hive mind?
If a whole pigeon is a sensory system and pigeons are in constant communication with each other via scalar waves then what happens when they all gather together?
Is it in any way possible that the flock as a whole now forms a collective bio-field? A ‘hive mind’? Such a thing would surely increase both the power and sensitivity of the field. Being spread out over a greater volume it would have the capacity to receive a much weaker signal simply by collecting more of it.
In one study the behaviour of a termite colony differed depending upon whether or not it was separated from another colony by an aluminium sheet, suggesting some electromagnetic connection between the two groups. See: Distant cellular interaction
What happens within a murmuration of starlings? Are they merely exercising their wings prior to migration or are they creating a semi-permanent hive mind in preparation for navigation? A coherent field is formed that connects all the birds and this not only acts as an antenna but also a collective memory and possibly even has its own independent computational capacity.
The idea that this sort of disembodied mind could even exist will cause some to recoil, I know, but the actual mind is disembodied, in a sense, anyhow, as it is really just an electromagnetic field whose machinations are decoupled from the physical structures of the brain.
Again, anyone who thinks that the idea of a ‘consciousness’ emerging from the mere proximity of bird brains is absurd should reflect that the mainstream concept of consciousness is just this: an emergent property of the proximity of cells! If electrified jelly can make decisions then so can a connected set of pigeon brains.
Pigeons don’t need murmurations as they all live in close proximity anyhow.
Classical physics
Note that the above speculations are not even possible with classical electromagnetism. Here electric fields are either static, meaning they have no movement and don’t go anywhere, or they are photons which means they must necessarily shoot off at the speed of light in a straight line.
Neither of these configurations suggests the possibility of a self stabilising complex of vortex fields that can retain information whilst renewing its energy from external sources.
Again, the classical concept of electric currents is that of moving charge (electrons), which relies upon the idea of a voltage to push the tiny particles around as they have no motive energy of themselves.
This idea is just not very useful in any area of biology. Better is to think of circuits comprised of ‘field movement’ forming closed loop and helical vortex structures according to the updated Maxwell-Heaviside equations of Konstantin Meyl.
Vortex energy
Where do migrating birds get all their energy from? It does sound incredible that sufficient energy is stored as fat in a small bird, and so we should consider Meyl’s idea that they are breathing in electromagnetic vortices along with the usual oxygen supply and that these are somehow being used in mechanical action to aid flight.
Gerald Pollack has written a paper giving credible arguments to suggest that breathing has not much to do with oxygen anyhow and that in fact there is no gaseous exchange in the lungs at all! Pollack is suggesting an input of electrical energy in the form of electrons. However, replacing ‘electron’ with ‘field vortex’ makes for easier reading.
Questions: Can this vortex energy enter the body via any other means than the breath? Is it possible that the general discharge from the ionosphere could be gathered by the collective flock vortex? Could this help to maintain the field and could some of that energy enter the body of a migrating bird to help it in its flight?
Inheritance of migratory paths
Inheritance of acquired characteristics does exist and has been demonstrated in laboratory experiments.
In one example, rats were made to fear the smell of cherry blossom and their offspring inherited the fear. In another, a caterpillar was trained to crawl towards a red circle and the behaviour was inherited by the emergent moth. Behavioural patterns such as this have been transferred from one snail to another simply by injecting material from one animal into another (Michael Levin).
This all works because inheritance has nothing to do with DNA (See: The DNA delusion) but everything to do with the transference of a scalar wave complex from one generation to the next (See: Telegony and Evolution and Inheritance).
The rapid ‘evolution’ of bird migration paths is therefore no surprise from this point of view. Memories, intents and complete behavioural patterns are codified into scalar waves and these are precisely the format that is needed for inheritance, persistence and communication between individuals or groups of individuals.
These wave vortices are a biological Theory of Everything.
Navigation by scent
In one video, the idea of navigation via smell is mentioned but discarded because of the observed fact of pigeons homing with the wind behind them. Maybe, maybe not. Most people will assume that ‘scent’ consists of a chemical discharge but evidence and argument suggest otherwise: Scalar waves and nerves.
Scent is conveyed via scalar waves and is absorbed directly into the olfactory nerve conduit. The possibility now exists of a direct scalar wave connection between scent detector and target, with reduced attenuation, enhanced sensitivity and magnification via neutrino absorption.
Consider the abilities of certain moths to detect a mate several miles away. Can they really detect the direction at this distance by the sampling of molecules, or is it rather the case that an essentially electric connection has been formed and that it is this that provides the necessary information? Is it just the physical antennae that are receiving the information, or the whole of a bio-field?
Scents can easily leach through a plastic bag. Is this really caused by molecule leakage or by scalar waves tunnelling through an insulator?
The intensity of a smell clearly varies with wind direction, which does rather indicate that it is emanating from freely floating molecules. However, that does not preclude the possibility of an additional, semi-permanent connection with a fixed source of the scent vortices.
A global navigational map?
Stunning work from Frank Brown demonstrates the ability of various animals, shellfish, plants and bacteria to synchronise to cosmic rhythms.
Organisms seem to know the time of year, day and position within the lunar cycle. They are aware of latitude and seemingly respond to external pressure changes even when kept at constant pressure within a laboratory. Storm conditions are predicted two days in advance using precisely this ability.
Faraday cages reduce these abilities and so the effects are assumed to be electromagnetic in nature. It is quite credible then that a pigeon or a flock of pigeons know quite well what is going on in their locale and exactly how it relates to solar, lunar and weather conditions. This isn’t quite the same as having a static map though and it isn’t obvious that navigation is possible from local information alone.
The point here though is that the Earth’s magnetic field is not just something that points North or South but has local geographic and temporal refinements that carry a large amount of information that has functional interpretations by every organism on the planet.
Scientific instruments are just fancy compasses and do not possess the refinements necessary to interpret such field information. Theories of electromagnetism inevitably reflect the crudity of the measurements that support them and are therefore themselves necessarily oversimplified. The result then is a science that effectively rules out half of the things it is trying to explain!
Connection to what?
Birds whose loft has been moved will initially return to the precise spot where the loft used to be – so what is it about this spot that is so special?
On the other hand, birds released from a ship at sea will return to the current position of the ship – so, again, where is the source of the connection?
The phantom leaf experiment showed a precise imprint of a leaf in some polymer sheets that persisted for only a few seconds, which doesn’t sound like a very good candidate.
Several considerations may be pertinent:
Plastic polymers are electrical insulators, which would therefore encourage the formation of scalar waves where the electrical component is minimised and hence the magnetic activity maximised. These are described by Meyl as magnetic potential vortices and are of great biological significance.
Electrical conductors such as the steel of a ship will form electrical eddy currents by a similar mechanism to the above.
Biological systems prefer the magnetic versions of the waves for both internal regulation and the conjectured extra sensory communication. Internal vortices are friction free (no energy loss) and will in any case absorb energy from heat and other sources within the body.
Once a connection has been established, the connection itself will absorb energy along its length from solar neutrinos to maintain itself and will therefore grow proportionally stronger as the endpoints become further apart. (Sheldrake’s elastic band analogy is accurate in this respect).
So it seems likely then that a connection is made with some biological activity in or around the loft. In the case of pigeons released from a rural location this might be the grass and trees of a forest; in the case of birds accustomed to living on a ship or at the top of a block of flats, it is the crew of the ship or the pigeon handlers themselves that serve as a useful anchor.
The answer to the question of whether atoms (or even objects) actually touch each other is dependent upon what framework is being considered, whether it be classical, quantum or particle physics. The only consistent framework is one that regards the entire universe as modulations of a single continuum, a ‘vector field’.
Classical physics
In classical (Newtonian) physics, space is well behaved and objects occupy well defined volumes, with the gaps in between filled with either ‘gas’ or ‘vacuum’. Things are made of different substances, e.g. glass, metal, air, water etc. No object can be in two places at once and we cannot have two different solid objects occupying the same position in space. There is no real sense of the objects being made of ‘atoms’ nor of what those atoms might consist.
So imagine a metal ball falling onto a glass table, for example. The table top is at height zero and the ball is at some height, a metre, say.
Now at what point does the ball bounce? What is the height above the table at which the downward speed is zero and it starts to move upwards again? At what point do they ‘touch’?
Does the lowest point of the ball ever reach a height of zero? If the answer is ‘yes’ then the point where the ball touches the table is occupied by both metal and glass at the same time. This is a contradiction of the whole idea of solid, separately defined objects and it is not relevant that it is only one ‘point’ that these objects have in common. The Laws of Physics must apply everywhere or why are we bothering?
So the ball must come to rest at some finite distance above the table and this is the point at which the velocity reverses direction. The two objects never touch; they cannot. A finite distance is maintained between the two materials at all times.
How then does the ball reverse direction?
Energy transfer
Kinetic energy is transferred from ball to table and then transported back to the ball to make it go in the opposite direction. This seems fine but it means that the energy is transferred without contact between the objects, meaning that it must, at some non-zero height, leave the ball and enter the table; it must pass through a finite amount of air (or vacuum) all by itself.
A force field?
The ball is moving in one direction and then turns around before touching the table top and starts to go in the other direction. This is enough to deduce some force of repulsion even without knowing how it should arise. Some kinetic influence is maybe emanating from the table top that repels other objects before they reach it, before they make contact.
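As a toy illustration of this, here is a minimal numerical sketch of a ball falling onto a short-range repulsive field. The exponential force law and all the numbers are my own assumptions, purely for illustration; any steep short-range repulsion behaves the same way, with the ball reversing direction at a strictly positive height.

```python
# Toy sketch: a ball falls under gravity towards a 'table' at h = 0 that
# repels it via a short-range field, so the two never actually touch.
# The force law a_rep = A * exp(-h / d) is an assumption for illustration.
from math import exp

g = 9.81          # gravitational acceleration (m/s^2)
A = 1e5           # strength of the repulsive field at h = 0 (m/s^2), assumed
d = 1e-3          # decay length of the repulsive field (m), assumed

h, v, dt = 1.0, 0.0, 1e-6   # drop from 1 m, starting at rest; small time step
closest = h
for _ in range(2_000_000):
    a = -g + A * exp(-h / d)     # gravity pulls down, field pushes up
    v += a * dt                  # semi-implicit Euler integration
    h += v * dt
    closest = min(closest, h)
    if v > 0 and h > 0.5:        # the ball is well on its way back up
        break

print(f"closest approach to the table: {closest:.4f} m")  # > 0: no contact
```

The ball ‘bounces’ a few millimetres above the table; the closest approach is small but never zero.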
This surely implies the spooky action at a distance that both Einstein and Newton disliked.
Classical mechanics is busted?
The idea of classical mechanics arose from an attempt to formulate simple everyday observations in terms of fairly basic mathematical formulae but, as the above shows, we can’t describe ‘bouncing’, or even the idea of ‘contact’, in a half-sensible manner!
Either classical physics is wrong, the mathematical formulation is wrong, or maybe it is simply not possible to describe reality in terms of familiar mathematics. We don’t even need to try anything ‘fancy’, we get into trouble simply attempting to define the boundaries of everyday objects.
Mathematics
The problem has arisen from defining objects as closed subsets of the continuum, that is to say, as spaces that incorporate their own boundaries. We can try defining objects as ‘open intervals’ whose boundaries are not part of the objects themselves, but this doesn’t really help.
We still can’t have two objects ‘touching’ as they will now always be separated by a single point at least and this point will never be part of either object nor can it be part of any other object. We have ended up with a universe containing an infinite number of empty points which physically separate the objects within it.
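To make this concrete, a minimal example: take two ‘objects’ to be the open intervals

$$O_1 = (0,\,1), \qquad O_2 = (1,\,2): \qquad O_1 \cap O_2 = \varnothing, \qquad 1 \notin O_1 \cup O_2$$

However close the objects sit, the point $x = 1$ belongs to neither of them, nor to any third object.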
Maybe we can do some calculations with this model but it is highly unsatisfactory as a description of the nature of reality. I would contend that this is on a par with quantum physics for boggling the mind.
Field physics
The observation or maybe ‘deduction’ that the ball is repelled before it even reaches the table gives a clue to a better formulation of space and matter.
Even without doing any clever physics we can say that the objects are not separate in space but are part of, or embedded within, some omnipresent force field that controls the movements of even the largest objects and ensures that they conform to some universal organisational principle.
The idea that this force should ’emanate’ from the objects themselves and should affect other objects at a distance is, as Newton himself put it: “so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.” – Newton 1692 – Wikipedia
Vortex physics
The vortex physics of Konstantin Meyl formulates single atoms as spherical vortices in an electromagnetic field. Negatively charged field structures will repel each other, with the force of repulsion increasing with proximity. This force reaches a maximum at the vortex ‘radius’ (shown) and effectively gives the impression of solid matter.
It isn’t quite clear what is meant by ‘touching’ in this respect. Two vortices will repel each other so much that it is unlikely their radii will overlap but if they do then that is fine as the respective fields will simply add together; we do not have separate substances as with classical physics and do not have discrete ‘solid’ particles to worry about as with particle physics.
The whole of the universe is simply a shifting field continuum which is given form by the field structure itself, with the behaviour of vortices giving the impression of solid objects making contact, exchanging energy and bouncing off each other.
To ask if two vortices make contact is to ask whether two eddies in a river will ‘touch’ each other. They can come close and bounce off each other but the idea of touching is somewhat nebulous as neither has a well defined boundary at all and the vortex influence associated with each will conceptually extend to infinity.
A self-consistent physics
Classical physics arose from the attempt to describe everyday observations such as ‘bouncing’ in terms of mathematical formulae but as we see, these attempts have resulted in a lack of consistency in the theory and a mathematical model that makes no sense with respect to reality.
The field physics formulation is counter-intuitive to start with but is self-consistent and in accordance with reality.
Try thinking of this in terms of particle physics or quantum waveforms and the explanations become positively surreal. What does it mean to even ask if two quantum particles are touching each other? Presumably they are and they aren’t, both at the same time!
Empiricism vs aestheticism
Part of the impetus for this post was the question of ‘Do atoms touch?’ and part was a discussion on the question of whether theories should be ‘beautiful’ or whether empirical observations should rule the day. There seemed to be something of a consensus that experiment should overrule theory and that beauty is a mere bonus and then only if you are lucky.
This is all wrong and has resulted in a physics that resembles a patchwork quilt which, although agreeing with a large body of experimental data, nevertheless has no proper foundations, a tenuous relationship to reality and multiple contradictions, and really consists of a collection of isolated and inconsistent theories held together with empirically determined ‘adapter’ constants.
The idea that observation and deduction alone are sufficient to formulate a theory is incorrect. Observations are always made with respect to the framework and are interpreted within it; so, for example, the idea that the universe is full of discrete ‘objects’ is already a sort of theory deriving from intuition and observation – but it is wrong!
Any assertion that discrete objects exist needs proving somehow. There needs to be some testable framework that describes these objects and classical physics has failed to provide this at the outset. Any observation of an ‘object’ is now misleading and any science that uses such an ill-defined concept is eventually doomed.
Similar considerations apply to modern physics. Observations (data) are interpreted with respect to the model, which itself is held to be correct and can never really be disproved. All that happens is that some ‘fix’ is put in and we end up with something like quantum entanglement and information coming backwards in time from the future.
There is more to a theory than mere aesthetics. It should be internally self-consistent and consistent with its own predictions, and if it isn’t then it is just wrong and needs to be discarded. In addition to this, it must have some degree of ‘reasonableness’, some relationship to some presumed nature of reality which sounds ‘viable’ at least.
A theory such as quantum mechanics which allows multiple and outlandish interpretations regardless of whether or not they make any sense, surely has no place in scientific discourse.
Newton’s statement deserves reiteration: “That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.”
This is not theory and not observation and so not really ‘science’ – but it has proved to be correct!
So do they touch?
No. Quite surprisingly, the whole concept of ‘contact’ is not supported by any theoretical framework; it isn’t even possible to define it let alone test for it.
“These are the lies you were taught about electricity: That the electrons themselves have potential energy, that they are pushed or pulled through a continuous conducting loop and that they dissipate their energy in the device. My claim is that all of this is false.” – Derek Muller (Veritasium)
The quotation above is taken from the video below from the Veritasium YouTube channel. This, and the follow-up video, are proof that electricity does not consist of a flow of electrons. Moreover, they give some clues at least as to what is actually happening. Ideas from the vortex physics of Konstantin Meyl will complete the picture to give a credible explanation for the phenomenon.
At 2:12 we have: “There is no continuous conducting wire that runs all the way from a power station to your house. No, there are physical gaps, there are breaks in the line such as in transformers [..] Electrons cannot possibly flow from one to the other.“
Furthermore, with respect to alternating current: “If the electron flow is in two directions then why does the energy only flow in one direction?“
At 8:20: “People seem to think that you are pumping electrons and that you are ‘buying electrons‘ (from a power company) which is just so wrong. [..] It’s quite counter-intuitive to think that the energy is flowing in the space around the conductor, but the energy which is travelling through the field is going quite fast” – Dr. Bruce Hunt
“It’s the fields and not the electrons that carry the energy” – Muller
11:07 – Under-sea cables that were coated in an insulator and then encased in an iron sheath (for protection) did not perform well.
A transformer (right) consists of two coils of wire separated by a gap. Electricity (whatever it may be) flows through the coil on the left and radiates a field shown as two straight(!) lines which then induces a current by somehow interacting with the wire on the right.
Well, the only thing a ‘field’ can interact with is another field. People will say that an electric field can set a charge in motion, but a charge is only characterised by its own field and any interaction is totally dependent upon that field.
Assertion: Electricity is some sort of ‘field movement’ within the wire and surrounding insulator. This field extends beyond the wire in some form or other and is able to induce similar movements in the other half of the transformer.
Vortex physics
In the vortex physics of Konstantin Meyl, field movement is described by some slightly modified versions of the Maxwell-Heaviside equations and can adopt several interesting shapes. Helical fields (right) are common, as are ring vortices (smoke rings) and spherical vortices (electrons).
Electric and magnetic fields are inextricably entwined via ‘movement’ at right angles and ‘movement’ is innate to both types of field; ‘static’ fields are an illusion.
Electric fields will propagate easily within a conductor and insulating material will favour the movement of the magnetic component, leading to characteristic patterns of field movement.
The image below is from Viktor Schauberger and depicts the flow of water in a wooden pipe but will serve to illustrate the flow of an electromagnetic field in an insulated wire.
The wire is conductive and favours a helical flow of an electric field whereas the insulating cable favours the construction of magnetic ring vortices. The field vectors for the electric and magnetic fields are at right angles to each other as required.
Similar patterns are observed in the flow of blood (a partially ionised fluid) in the arteries, in vast ‘plasma’ clouds in space (Thunderbolts project) and in weather patterns in our atmosphere (Birkeland currents). These currents are self organising along the lines of a least energy principle and highly efficient, losing very little energy.
So what is electric current?
Forget about electrons for a minute and imagine all manner of field turbulence within a battery as various chemicals react. All this activity amounts to a sort of electromagnetic ‘pressure’; the energy wants to go somewhere.
Now attach an insulated wire and an inviting conduit has appeared. The turbulence enters the wire and begins to self-organise according to local conditions. A helical electrical component forms down the conductor and a magnetic ring vortex proceeds down the insulating sheath.
When these formations get to the light bulb, the ambient conditions have changed and are less favourable for the maintenance of the structures that have hitherto been so stable. The lack of a proper insulator and a less conductive filament cause the whole structure to break up and reformat as an altered field geometry, releasing, in the process, field structures that are interpreted as ‘energy’.
Some structures are transmuted directly to photons and emitted with a characteristic spectrum whilst others are caught up in existing field vortices and will manifest as ‘heat’ (vortex gains and losses). Some of the energy in ‘hot’ vortices will reconfigure as infra-red photons and fly away at the speed of light.
Note that within this formulation, there is no transmutation from electron to charge to force to matter and back to energy as all of energy, charge and matter are really the same thing, namely field activity.
The video comments explained
Transformers
There is no need for electrons here. Field movement travels along a wire and its surroundings. Vortices are discharged from the wire and travel towards the receptor coils. They enter these structures and begin to self-organise in a way that is encouraged by the geometry of the coil.
Movement is intrinsic to electromagnetic fields and somehow a ‘current’ is formed.
The structure of the field in between the transformer coils is almost irrelevant as the flow will reformat within the wires anyhow. This ‘must’ happen as the laws of physics must be obeyed and the current must flow according to local conditions.
Think of pumping water into a hose pipe and waving it around. Whatever the state of the water when it was outside the hose and whatever the nature of the waving, the water will form its own flow profile and can really only go one way or the other along the pipe.
Mainstream physics will talk about the field ‘inducing’ a current in the receptor coil but here the field literally flows from one wire to another. Like water.
How do fields carry energy?
They are not static but literally flow from one place to another.
Watch videos of ring vortices in water to see that they can clearly carry a lot of energy. Similarly a magnetic ring vortex will contain a great deal of electrical energy and this will be made available for use at the other end of the wire somehow.
So it is very likely true that the bulk of the energy is carried in the insulating cable as speculated in the video.
Imagining fields as either static or vibrating entities does make it hard to see how energy could be transmitted, and transmitted in one direction only, but the image of a moving ring vortex is surely compelling.
Electrons cannot possibly flow..
No, but field vortices can as they are not tied to ‘matter’.
Undersea cables
The magnetic ring vortices are an integral part of the flow geometry and they perform better in an insulator.
A big current needs a big cable and proportionally sized vortices are required which means a proportionally larger insulator sheath. It is no good just spraying a coating on the cables. The insulation isn’t to stop the electrons falling out but to allow an enclosing vortex structure to form which lends stability and efficiency to the whole flow.
A thin insulator does not allow the rings to form properly; the signal is distorted and the flow starts to break up and dissipate into the salt water.
“If the electron flow is in two directions (alternating current) then why does the energy only flow in one direction?“
Conjecture: With alternating current, the ring vortices are of alternating polarity (direction of spin) but still travel in the same direction. The electrical field vector within the conductor alternates between the forward and backward direction but this is not where most of the energy is held.
The ‘energy’ is contained within the vortex and not in its speed of travel or the direction of spin. For most purposes, the transmutation from vortex to energy is a crude breakdown of structure and is agnostic of the spin direction.
Why is alternating current more efficient than direct current?
It is claimed that this is because it is transmitted at a higher voltage and that this voltage is created via transformers.
Guessing now: the transformer somehow translates a high rate of low-energy vortices into a lower rate of high-energy vortices. Energy transmission is related to the energy content of the vortices whilst energy loss is proportional to the number of vortices. Energy loss is via ‘surface loss’ from the rings.
The idea of ‘voltage’ is of limited use here.
What is discharge?
Several mentions of discharge (of electrons) or charge loss are made along with suggestions of field induction (the capacity to move electrons); these are a standard part of the vocabulary of physics and electronics and all no doubt have different laws to help quantify their behaviour.
Within the framework of vortex physics, however, these are all the same phenomenon, that is to say, the movement of field vortices:
Discharge: Field Vortices going where you don’t want them to.
Charge loss: Electrons cannot disappear completely but vortex structures can
Induction: The change in geometry of a field structure caused by a different conductive environment
None of this has anything to do with electrons.
The second video
The diagram shows part of the experimental setup. A battery (capacitor) is placed in a circuit with a light bulb and the connecting wires stretch out to a distance of one light-second (actually much smaller) so that the ‘electricity’ is assumed to take one second to complete the circuit. See here: How electricity actually works
When the experiment is performed and the current switched on however the light comes on almost immediately and at least much sooner than it takes for light to travel around the circuit.
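The two timescales in play are easy to put numbers on. A minimal sketch, assuming the roughly one-metre battery-to-bulb gap described in the video (the 1 m figure is my assumption from that description):

```python
# Compare the time for a field to cross the small gap with the time for a
# signal to travel the full one-light-second wire path.

c = 299_792_458.0        # speed of light (m/s)
gap = 1.0                # battery-to-bulb distance across the gap (m), assumed
wire_path = c * 1.0      # one light-second of wire (m)

t_gap = gap / c          # field front (or expanding vortex) crossing the gap
t_wire = wire_path / c   # signal following the wire all the way round

print(f"across the gap : {t_gap * 1e9:.1f} ns")  # ~3.3 ns
print(f"around the wire: {t_wire:.1f} s")        # 1.0 s
```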
How does this happen? The answer is that as field currents start to flow around the circuit they will discharge into the air around the wire and form a de facto field which expands radially at close to the speed of light and eventually impacts upon the supply wire to the light bulb.
These vortices enter the wire and start to flow according to local conditions thereby creating a ‘current’, that is to say a structured flow of field energy.
Several commentators remark that this current should be infinitesimal, however, it turns out in practice to be strong enough to illuminate the light bulb.
What is going on?
Envisaged by classical physics is an electric field such as illustrated here, possibly coupled with an accompanying magnetic field that similarly decreases in field strength in proportion to the distance from the wire. The impact on the receiver wire will be small.
As soon as this field impacts the wire, however, current flows in the wire and produces its own field, which starts to interact with the first (transmitter). Ignore the plus and minus signs here; the point is that the two fields are interacting over a region that is much larger than just the second wire itself.
Consider then this possibility: The electric and magnetic fields together form a helical vortex structure around the wires with the transmitter forming an outward spiralling vortex and the receiver hosting an inward spiral.
Energy then flows from one vortex to the other, the inverse square law is not appropriate and sufficient energy flows to light the bulb.
The vortex from the transmitter expands at close to the speed of light and impacts the second conduit. The current is small at first but it creates its own vortex which expands at a similar speed, harvesting more and more energy as it does so.
At first, a doubled radius of the second vortex means a rough doubling of the energy gathered and hence a doubling of the current formed.
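A toy numeric rendering of this conjecture (the proportionality constant is arbitrary; only the linear trend matters):

```python
# If the receiver vortex radius grows at roughly the speed of light and the
# gathered energy (hence current) grows in proportion to that radius, the
# pre-circuit current rises linearly with time. Units here are arbitrary.

c = 299_792_458.0            # speed of light (m/s)
for t_ns in (5, 10, 20, 40):
    t = t_ns * 1e-9          # elapsed time since switch-on (s)
    radius = c * t           # radius of the receiver vortex (m)
    current = radius         # current taken proportional to radius
    print(f"t = {t_ns:>2} ns  ->  relative current {current:5.1f}")
```

Doubling the elapsed time doubles the relative current, as described above.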
A unified field forms with a simplified form shown right and the rate of energy transport from one wire to another is .. anybody’s guess! It is likely that ultimately the ‘induced’ current drops off in approximate proportion to the distance purely on geometrical grounds.
This isn’t a ‘law’ though but a general principle as what is measured is always some sort of average which has been interpreted via a measuring instrument.
Note the contra-rotation of the helical fields and imagine what this would look like when extended over the whole circuit. The rotation is always the same way with respect to the current. An extended ring vortex is formed around the whole circuit and this is already known to be a highly stable structure.
Ok, now consider the screenshot from the second video:
The green line shows the current in the transmitter wire and the yellow shows the current in the receiver. The green arrow points to the time that the switch was turned on; the transmitter current shoots up almost instantaneously.
The current in the receiver though shows a linear increase up to the point of the yellow arrow which represents the current that initially drives the light bulb. Thereafter there is a sharp increase as the current completes the entire circuit.
The vortex model can be said to predict the linear increase but the classical model cannot. What would be expected by established theory is a sudden but ‘infinitesimal’ current which would then remain stable at a very low point.
Conduction within nerves
If conduction within wires is largely by means of ring vortices then maybe the same is true for the transmission of nerve impulses?
Many papers find that there is a relationship between the speed of propagation of nerve impulses and the thickness of the insulating sheath surrounding a nerve; the thicker the myelin sheath, the faster the signal propagation:
Local modulation of neurofilament phosphorylation, axonal caliber, and slow axonal transport by myelinating Schwann cells – de Waegh, Brady https://pubmed.ncbi.nlm.nih.gov/1371237/
As with the undersea cables, a thicker sheath allows the free and unconstrained development of ring vortices whilst a thin sheath necessitates a deformation or stretching of the vortex to fit within the sheath. Additional surface area means additional ‘field drag’ (also known as friction), which leads to energy loss and slower propagation.
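For comparison, classical physiology captures the same observation with a linear rule of thumb: conduction velocity scales roughly with outer fibre diameter, at about 6 m/s per micrometre for large myelinated fibres (Hursh’s factor; the constant is approximate and not part of the vortex model):

```python
# Classical rule of thumb (not from the vortex model): for myelinated
# fibres, conduction velocity v ~ k * D, with k roughly 6 m/s per um of
# outer fibre diameter. Both views agree: thicker sheath, faster signal.

def conduction_velocity(diameter_um: float, k: float = 6.0) -> float:
    """Approximate conduction velocity (m/s) for a myelinated fibre."""
    return k * diameter_um

for d in (2, 5, 10, 20):            # outer fibre diameters in micrometres
    print(f"D = {d:>2} um  ->  v ~ {conduction_velocity(d):5.1f} m/s")
```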
Summary
Electricity is the continuous flow and transmutation of energy fields from one environment to another. The geometry and conductive properties of that environment in conjunction with the principles of vortex physics characterise the flow.
Within this landscape, various patterns emerge and the simplest of these such as voltage, current and field ‘induction’ have been adopted as standard but none of them have a particularly sound basis in reality, being largely artefacts of the measuring instruments themselves.
Other patterns such as ring or helical vortices on the other hand are theoretical constructs that have not been measured and yet give a greater and more consistent understanding of the phenomena that we actually observe.
The sun is the centre of a giant cosmic energy vortex. Field energy spirals inwards and condenses at the centre to produce the hydrogen ball described by astronomers. All energy is supplied from the outside and input energy equals output energy. The nuclear reactor described by mainstream science leads to too many anomalies.
Several odd but well documented characteristics of the sun need explanation:
The sun maintains a spherical shape despite rapid rotation
The hottest part of the sun is outside of the sun (coronal heating problem)
Comets can accelerate away from the sun
There is no bow wave as the sun moves through space
Solar output has been fairly constant over several billion years
The atomic composition is consistent with transmutation of elements
Hypothesis
The sun is not to be considered as a stand-alone object that is burning its own fuel and radiating the resulting energy into space, but as part of a much larger system of energy taking on the form of a vortex and with the sun at the centre as an integral part of that vortex.
Electromagnetic field energy spirals inwards towards the sun where it can materialise as matter, first as electrons and then hydrogen, with further elements produced by transmutation.
Much of the energy will be manifest as photons or other energetic particles which will then radiate outwards as ‘sunlight’. A constant supply of energy means that the sun is not getting any bigger or smaller on average, thereby providing a stable platform for life to evolve and flourish over the millennia.
Vortex physics
The diagram, taken from Konstantin Meyl’s book Vortex Physics, shows the general structure of a vortex whether it be composed of water, air or an electromagnetic field. Intensity is zero at the centre, increases to a maximum at the vortex radius and thereafter drops off, tending to zero at infinity.
Shown is how the velocity of, for example air in a tornado, will vary according to distance from the vortex centre. Rotational movement within the vortex radius is described by Meyl as increasing linearly as shown; this is identical to the rotation of a solid body.
Compare this with this description of the sun: “The radiative interior exhibits solid-body rotation” – Wikipedia
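This profile is what fluid dynamics calls a ‘Rankine vortex’; a minimal sketch of it, with purely illustrative numbers:

```python
# Velocity profile of the vortex described above: solid-body rotation
# inside the vortex radius R, then a 1/r fall-off towards infinity.

def vortex_speed(r: float, R: float = 1.0, v_max: float = 1.0) -> float:
    """Tangential speed at distance r from the vortex centre."""
    if r <= R:
        return v_max * r / R    # rises linearly: solid-body rotation
    return v_max * R / r        # decays towards zero at infinity

for r in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"r = {r:3.1f}  ->  v = {vortex_speed(r):.2f}")
```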
Cosmologists have simply missed the idea that energy can come from outside the sun and as a consequence imagine that all the complex behaviour they see around the sun comes not from local causes but from complex activity inside of the star itself. ‘Stellar dynamos’ are thought to exist which wind up some electrical energy and beam it outside of the sun:
“The geometry and width of the tachocline are thought to play an important role in models of the stellar dynamos by winding up the weaker poloidal field to create a much stronger toroidal field” – Wikipedia
Maybe, but how did these dynamos come into existence and what powers them? Each answer begs another question.
The laws of electromagnetism favour the appearance of toroidal vortex structures at all scales from that of the electron up to that of an entire galaxy and this is all the explanation that is needed to understand the general structure of the cosmos.
The Coronal Heating Problem
“The coronal heating problem in solar physics relates to the question of why the temperature of the Sun’s corona is millions of kelvins greater than the thousands of kelvins of the surface” – Wikipedia
This is simply not surprising from the point of view of vortex physics; they are describing a ‘heat vortex’. Energy is spiralling inwards from the solar system and continues to intensify until its inward movement is balanced by the outward radial ‘pressure’ of the energy from the centre.
A vortex radius is formed and is measured as a heat-energy maximum.
“The high temperatures require energy to be carried from the solar interior to the corona by non-thermal processes, because the second law of thermodynamics prevents heat from flowing directly from the solar photosphere (surface), which is at about 5800K, to the much hotter corona at about 1 to 3 MK (parts of the corona can even reach 10MK).” – Wikipedia
This is the problem then. The second law of thermodynamics is said to ‘prevent’ heat flowing from cold to hot, but everywhere in vortex systems we see precisely the opposite, i.e. an organised flow of energy or matter from low intensity to high.
Consider the huge amounts of energy present at the centre of a hurricane. The spiral activity begins many miles from the centre of the structure itself, moving slowly at first but increasing in speed as the radius of movement decreases. The wind reaches its maximum velocity at the visible vortex radius, the ‘wall’ of the hurricane.
The energy for the spiral did not ‘build up’ all of its own accord. It did not come out of nothing, it was guided inwards from the environs by the vortex structure. We are not seeing the usual radiative (dissipative) flow of energy mandated by the laws of thermodynamics but an inward moving compression of energy.
The heliosphere
The image below left is the expected shape of the sun’s heliosphere as it moves through space. The sun is imagined as a solid body that cleaves its way through a cluttered medium, forming a distinctive bow wave of space debris and having a heliosphere that is not spherical but deformed by some sort of cosmic viscosity.
The image on the right shows the model derived from measurements. The heliosphere is nearly spherical without deformity and the whole moves with space instead of through it. The sun is not pushing its way through the interstellar medium but flowing along with it or even pulled by it.
This is because the whole vortex system is not separate from the interstellar medium but instead arises from movements within it. It is one and the same thing as space itself and moves in complete harmony with it.
The heliosphere is not maintained by the sun but the sun by the heliosphere and the heliosphere by its own wider environment.
The solar constant
The solar constant is a measure of the power output of the sun, the irradiance. Naively one might expect that, since the sun was created billions of years ago and has been burning up ever since, the irradiance would diminish over time. However:
” The solar constant is an average of a varying value. In the past 400 years it has varied less than 0.2 percent. Billions of years ago, it was significantly lower.” – Wikipedia
So the irradiance is increasing over time. This is open to interpretation but is certainly not inconsistent with the idea that the sun is receiving energy from the cosmos and then radiating it back out to the solar system.
‘Oumuamua
‘Oumuamua is an odd comet-like object that accelerated towards the sun, looped around it and then accelerated away again. In other words its orbit was not entirely governed by conventional gravitational laws. Hypotheses have been put forward as to how this might happen but the matter is far from settled:
“Further, it exhibited non‑gravitational acceleration, potentially due to outgassing or a push from solar radiation pressure.” – Wikipedia
Comets are said to have an exceptionally low density of about half a gram per cubic centimetre. One possibility then is that comets are largely electrical phenomena whose movements are governed more by the ambient electromagnetic field than the ‘gravitational’.
An electrical vortex itself moving in the strong vortex field near the sun is capable of quite complex behaviour. It is quite conceivable that such an entity could interact with the ambient field in such a way as to accelerate away from the sun, against the gravitational gradient. It is even conceivable that it could absorb energy from such a field in order to power itself and even to transmute some of that energy to matter and thus expand in size.
Maintenance of spherical aspect
“The Sun is a near-perfect sphere with an oblateness estimated at 9 millionths, which means that its polar diameter differs from its equatorial diameter by only 10 kilometres” – Wikipedia
Wow! This is not just a spinning mass of gas held in place by gravity. It just isn’t. The sun is largely an electrical phenomenon and is shaped by electromagnetic forces.
If the sun were a spinning mass of hydrogen then we would expect to see a large swelling at the equator much as is the case in the rather solid Earth and other planets.
What we are seeing is the small centre of a much larger vortex structure; the heliosphere. Electromagnetic field activity spirals inwards and organises itself into a turbulent sphere of field energy.
Energy is concentrated at the centre and it is here that we can expect some of that energy to materialise as electrons or hydrogen ions. Maybe the small bulge at the equator is indicative of the amount of matter that is created; in other words the ‘mass’ of the sun.
Transmutation of elements
Wikipedia gives the elemental composition of the sun as follows:
Hydrogen: 73.46%
Helium: 24.85%
Oxygen: 0.77%
Carbon: 0.29%
Iron: 0.16%
Neon: 0.12%
Nitrogen: 0.09%
Silicon: 0.07%
Magnesium: 0.05%
Sulphur: 0.04%
So there is an abundance of the simplest elements, and the others are largely those elements described by Louis Kervran as being capable of transmutation (even within biological systems) from the simpler ones.
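A quick tally of the list above makes the point (figures as given by Wikipedia):

```python
# The two simplest elements account for almost the whole of the sun.
composition = {"H": 73.46, "He": 24.85, "O": 0.77, "C": 0.29, "Fe": 0.16,
               "Ne": 0.12, "N": 0.09, "Si": 0.07, "Mg": 0.05, "S": 0.04}

print(f"H + He    : {composition['H'] + composition['He']:.2f}%")  # ~98.3%
print(f"all listed: {sum(composition.values()):.2f}%")             # ~99.9%
```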
The simplest interpretation then is that energy enters the Sun and the field conditions thus created are propitious for first, the materialisation into the simplest form of matter and second for the transmutation of these elements into more complex atoms.
There is therefore no need to assume a Big Bang in which all matter was created at once, rather matter is in a continual cycle of creation and transmutation, sometimes to other elements and sometimes back to photons to be radiated outwards into the solar system as light.
Sun-Earth interactions
The rotation rate of the Earth is not constant and sometimes varies over the course of a day:
“Not only the minimums of the Earth’s rotation show connections with the solar activity period, but also, as Currie (1973) showed, the rotation rate of the Earth actually correlates with the solar activity!” – Attila Grandpierre
Grandpierre notes that sometimes the change in solar activity comes first and at others it is the Earth’s variations that seem to initiate activity in the sun!
“Trying to understand what do these coincidences mean, it is important to note, that within the third time-range of coincidences in 1969-1971, at first the Earth produced the jump in 1969 (Le Mouel, Gire, Madden, 1985), and the Sun followed it only afterwards, in late 1971! In a time-linear causal sequence this circumstance would involve that the Earth was more sensitive to the global conditions of solar system at that time, and that the core changes of the Earth induced changes in the solar core! This circumstance points to a mutuality in the core-core interactions, since it seems to be clear that at other occasions the Sun was the initiator of correspondence.”
To be considered though is that the initial cause of all these phenomena is a surge in activity of the solar energy field. Sun-Earth connections exist in the form of large electromagnetic filaments (Thunderbolts Project) which can accumulate energy from the cosmic field and transmit it to both Sun and Earth, making it seem that one or other of these bodies is the origin of the effect, depending upon where the effect is first observed.
Slight digression
The ideas of matter, and thence mass, appeared quite early on in scientific thought and were followed closely by the idea of a gravitational force that emanated from the observed matter and exerted an effect upon distant masses.
So matter is considered primal in terms of causality; it is matter that gives rise to force fields and not the other way around. Things that are visible and tangible are given a special place in this ideology despite all evidence to the contrary.
This way of thinking was approved of by neither Einstein nor even Newton himself, both decrying the idea of action at a distance as not worthy of consideration. However, despite advocating field physics in the form of General Relativity, scientists still persist in thinking according to the old patterns, often whilst pretending otherwise.
Gravity
So the sun is at the centre of a cosmic (electromagnetic) field vortex which spirals inwards and gives rise to the warm shiny ‘object’ we see.
Purely geometric considerations mean that the field strength varies with the inverse square of the distance from the vortex centre, becoming stronger nearer to that centre.
Note that the inverse square law of gravity is always expressed as weakening as the distance increases which only serves to give the (erroneous) impression that the field is somehow ‘broadcast’ outwards from the massive object, that some insubstantial energy is radiating i.e. moving away from the source.
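The geometric step itself is uncontroversial: whatever flows uniformly through concentric spheres is diluted by their surface area,

$$I(r) = \frac{P}{4\pi r^2} \;\propto\; \frac{1}{r^2}$$

where $P$ is the total power crossing each sphere; this holds whether the flow is pictured as radiating outwards or spiralling inwards.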
Now according to Meyl (Scalar Waves: A First Tesla Physics Textbook), a strengthening field gives rise to a contraction of matter and hence a smaller distance (rulers literally shrink) and a weakening field gives the opposite: Tamarack mines experiment. A planet orbiting the sun will then expand slightly on the night-time side and distances here are slightly longer.
The size of the Earth on its outer side then is larger in accordance with a square law and a circular orbit is therefore the default mode of movement. This has nothing to do with any sort of force acting at a distance. This is like trying to drive a car whose wheels on the left have been made slightly larger than the ones on the right. A steering wheel is not necessary to drive in a circle.
This arrangement will behave, because of the inverse square aspect, in a similar fashion to the assumed gravitational force of Newton although nothing of the sort is going on. Comets are therefore free to move according to local field conditions and are not so constrained in their paths as previously imagined. In the case of the Earth we can add some velocity and inertia to recover the elliptic orbit we are familiar with.
Superimposed upon this system are variations caused by genuine gravitational effects produced by the Earth itself and whatever ‘matter’ there is in the Sun. Attempts to calculate the mass of the sun from the perceived effects of its gravitational field are therefore fraught with risk as most of those effects are not in fact gravitational in nature.
A genuine gravitational effect does exist and is undoubtedly what we measure at the surface of the Earth, but according to the above, this is just not the same mechanism as happens with respect to the Earth moving in the field of the Sun. They have similar measurable effects and the presence of the inverse square effect gives the impression that they are the same phenomenon.
The Sun’s field is not the same as the Earth’s; it isn’t ‘gravity’!
The science of physics has a good quantitative agreement with a wide range of experimental data, but as noted by David Bohm, falls short of an even half-comprehensible description of the nature of reality.
This page lists but a few quirks, anomalies and shortfalls of contemporary physics, some of which have bothered me for years and others that have only recently become apparent. All of these are clarified by the Theory of Objectivity from Konstantin Meyl in his book: “Scalar waves: a first Tesla physics textbook”.
The kinetic theory of gases
Quantum wave function nonsense
Covalent bonding
Too many ‘stuffs’ and fundamental constants
What is ‘heat’?
What is ‘time’?
Proton radius puzzle
Avogadro’s number
Electric charge is a redundant concept
‘Mass’ and ‘energy’ are not fundamental to physics
The kinetic theory of gases
Mainstream science posits that the molecules in a gas are whizzing around all over the place and bouncing off each other and that this is responsible for the phenomena of Heat and Pressure. The vibration of specks of dust in sunlight (Brownian motion) is said to be caused by this.
Molecules are conceived as having finite size, hard boundaries and bounciness. They move around in an otherwise empty vacuum and can somehow transmute energy from infra-red radiation to kinetic energy and back again. None of this is true, as usual, but physicists prefer not to think about it too much.
Heat (temperature) and pressure are described as almost synonymous with speed of motion of molecules and equations from Einstein give some credence to this.
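For reference, the mainstream relation being described here ties temperature directly to molecular speed; a minimal sketch:

```python
# Root-mean-square molecular speed according to the kinetic theory of
# gases: (1/2) m <v^2> = (3/2) k T  =>  v_rms = sqrt(3 k T / m).
from math import sqrt

k = 1.380649e-23                        # Boltzmann constant (J/K)
m_N2 = 28.0134e-3 / 6.02214076e23       # mass of one N2 molecule (kg)

def v_rms(T: float, m: float = m_N2) -> float:
    """rms speed (m/s) of a molecule of mass m at temperature T (K)."""
    return sqrt(3 * k * T / m)

print(f"N2 at 300 K: v_rms ~ {v_rms(300):.0f} m/s")   # ~517 m/s
```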
Ok, but how do molecules fly around in a liquid such as water, and what about solids? Steel or quartz, for example, will certainly get hot, but their molecules are not flying around; they maintain a regular lattice structure. And how is temperature then independent of pressure?
If I hold a cup of hot coffee, the molecules bounce around the liquid, make the cup molecules vibrate and then somehow convert the vibrations to infra red radiation so that the heat can be felt at a distance. The heat makes the molecules in my hand whizz around but not so much that it disrupts cellular activity.
I think this is nonsense and Konstantin Meyl and Gerald Pollack at least are in agreement with this.
This is important, as descriptions of DNA construction, for example, rely upon the random movement of molecules to drive the replication process, and if none of this is true then those descriptions need rethinking. Similar concerns apply to the functioning of ion channels; diagrams and animations show molecules randomly hurling themselves at the channels and being selected on a statistical basis to achieve the required balance. The equations seem to work but that does not mean that the mechanism is correct.
Further reading: “The Fourth Phase of Water” – Gerald Pollack
What is heat? (Meyl)
Atoms are complicated vortex structures within electromagnetic fields and the ‘vortex radius’ is what is taken to be the size of the atom.
The vortex structure theoretically extends to infinity and so the ‘space’ between atoms is not empty but consists of an extended field structure which serves to keep atoms at a distance from each other and is the basis of ‘pressure’.
A photon of light or infra-red is a sort of linear vortex that can roll up into a spherical vortex at any time or become absorbed into an existing atomic vortex. In this way, gases, liquids and solids can absorb arbitrary amounts of energy, there is no ‘quantum’ of energy needed.
As vortices increase their energy they can expand, meaning the liquid or solid may also expand owing to increased repulsive forces. This stored energy is latent heat. Heat transfer is by vortex gains and losses as field energy moves from one vortex to the next.
Vortices can ‘oscillate’, they can expand and contract rhythmically. Whole domains of the substance can vibrate in synchrony and form large areas of coherent oscillations. In gases and liquids this can give rise to Brownian motion.
This is not the same as the kinetic theory of gases, where long-distance movement of molecules is assumed. Vortices may account for Brownian motors, if such things exist, but are not suitable as an explanation for ion channel function or the DNA replication cartoons.
Measured temperature is the ‘rate’ of heat loss and proceeds via vortex losses. Stirring water reputedly makes it cooler, which means that although more energy is stored, it is being lost more slowly, as the vortices somehow retain the energy on a semi-permanent basis. Again, energy is stored as a field structure and not as the kinetic energy described by mainstream physics.
Shining light into a liquid may have a similar effect if a single wavelength causes coherent oscillations on a macro scale and retains the extra energy instead of dissipating it.
High energy vortices can lose energy in the form of quanta which can unroll into photons and be measured as infra-red radiation. The point here is that matter, energy, light, vortices are all the same substance; there is no need to wonder how kinetic energy can be converted to a photon for example.
Meyl uses the term ‘heat’ to describe the amplitude of vortex oscillation and the term ‘temperature’ to refer to the frequency of oscillation. The two are quite different.
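As a toy sketch only (this is not Meyl’s own mathematics), the distinction can be pictured as the two independent parameters of a simple oscillation; the numerical values below are arbitrary assumptions.

```python
import math

A = 2.0    # amplitude - 'heat' in Meyl's terminology (arbitrary value)
f = 5.0    # frequency - 'temperature' in Meyl's terminology (arbitrary value)

def vortex_oscillation(t):
    # x(t) = A * sin(2*pi*f*t): amplitude and frequency vary independently,
    # so a large slow oscillation is a different state from a small fast one
    return A * math.sin(2 * math.pi * f * t)

print(vortex_oscillation(0.01))
```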
A spinning vortex can absorb a photon of a particular frequency but emit one of a different frequency (and energy), thereby acting as a transducer. This is used in biological systems according to quantum physicists.
Further reading: “Scalar waves: a first Tesla physics textbook” – Konstantin Meyl
Covalent bonding
Shown is the covalent bonding between carbon and hydrogen atoms to form a larger molecule. Each atom ‘shares’ an electron with the other and the two electrons share a common pair of orbits.
An electron is claimed to be a small particle that orbits a nucleus, with an attractive force pulling it inwards balanced by a centrifugal force pushing it outwards, much like planets and gravity.
This may seem to be plausible until you try to imagine the electrons ‘orbiting’. How do they move? What keeps them to their orbits? How can an electron be in two orbits at the same time? The elephant in the room: In what way does a shared orbit constitute a ‘bond’? – How is a mutually attractive force created in this way?
A supposed explanation arises from quantum mechanics whereby an electron has no defined position, does not move around an orbit and isn’t really a particle.
An electron ‘cloud’ exists and part of it is shared with another atom, but it isn’t ‘real’ and is only a distribution of probabilities until measured. But if this is so then, again: how is an attractive force generated from a theoretical probability field?
The model proposed by Konstantin Meyl is much simpler, requiring no shared orbits, weak atomic forces or quantum probability fields. Atoms are described by electromagnetic vortices having enclosing electron shells of spinning electric fields. The spinning field creates a magnetic dipole and atoms will stick together, attracted by their dipole fields. The negatively charged electron fields will repel each other to maintain a distance between the atoms and give them a characteristic size and ‘shape’. See: Atomic structure: Meyl; The atom
No forces are needed apart from electromagnetism and Meyl makes quantitative predictions about the size of atoms which agree with experimental evidence.
The Second Law of Thermodynamics
The laws of thermodynamics are based upon the notion that atoms are a bit like billiard balls, bouncing around all over the place at random and that, left to themselves, they will eventually spread out all over the universe in a uniform ‘heat death’ with no real energy left and no possibility of organising themselves into a coherent structure.
They are, at best, hypotheses and not laws by any stretch of the imagination. They are in any case refuted by the field equations of Konstantin Meyl and by observations of actual reality.
“One of the simplest (formulations of the second law) is the Clausius statement, that heat does not spontaneously pass from a colder to a hotter body.” – Wikipedia
Meyl proposes a simple experiment whereby two metal spheres are placed near to each other, one being warm and one being hot. A parabolic mirror behind the warm sphere will focus the radiated heat towards the hotter sphere, which immediately refutes the postulate. Random heat fluctuations have become directed and structured by a simple geometric arrangement of matter.
Complaints will be made that this arrangement is somehow ‘cheating’ but it seems allowed by the definitions from Wikipedia:
“Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.” – Clausius
“It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature.” – Kelvin
Entropy
The second law is closely related to the concept of entropy as a measure of disorder which is said to increase with time, with the Universe becoming increasingly disordered and chaotic as time proceeds. If this were to be true then it would mean that if we look back in time, the Universe would appear more and more orderly, with everything perfectly arranged somehow just after the big bang and slowly deteriorating ever since.
This is contradicted by mainstream theory that sees galaxies and stars constructed out of almost nothing and life emerging from ‘soup’. Living things become more complex whilst the orbits of the planets synchronise via the phenomenon of resonance.
The Laws of Thermodynamics seem reasonable according to the billiard ball model so the observations we make should lead us to question that model. The Newtonian view of the Universe is essentially one of a flat featureless space inhabited by rather dull objects whose main interaction is via the radial forces of gravity. It is hard to imagine matter organising itself under these circumstances.
So the model is wrong and the theories of Meyl should be considered. Space is filled with a living and energetic field which has ‘movement’ built into it. Basic matter consists of forces that are ‘moving’, ‘spinning’ and are long range attractive but short range repulsive. Magnetic type forces and electrical forces act at right angles to each other and the natural and inevitable result is the formation of complex vortex structures with built in ‘energy’.
With this being the fundamental fabric of Reality there is little chance of anything fizzling out to any sort of heat death, the main characteristics are going to be continual cycles of movement, change, creation, organisation and reorganisation.
All movement is controlled by the Laws of Physics and there is no ‘randomness’ here, no true ‘disorder’.
Proton radius puzzle
Scientists have tried to measure the radius of the proton but get a different answer depending upon which element or isotope they use as the source of the protons. One obvious inference is that the proton is a different size within each element.
Konstantin Meyl has a model of an atom which is a bit like a bunch of nested soap bubbles (electron shells). The outer bubble is always the same size (fixed by the speed of light) and everything else squashes up to fit within the nesting arrangement. So, in particular, a proton will shrink according to the number of electron shells, or the type of particle (e.g. a muon), that sits in there adjacent to it.
Avogadro’s law
Avogadro’s law states that “equal volumes of all gases, at the same temperature and pressure, have the same number of molecules.”
Another way of writing this is “The same number of molecules of each gas has the same volume”. Or, setting the number of molecules (atoms) to one: “All atoms are the same size”.
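The numerical content of that rephrasing can be checked directly: at standard temperature and pressure the volume allotted to each molecule is the same for every (ideal) gas. A minimal sketch:

```python
N_A = 6.022e23       # Avogadro's number, molecules per mole
V_molar = 22.4e-3    # molar volume of any ideal gas at STP, m^3

v_per_molecule = V_molar / N_A      # identical for every gas
side = v_per_molecule ** (1 / 3)    # side of the cube each molecule occupies

print(v_per_molecule)   # ~3.7e-26 m^3
print(side)             # ~3.3e-9 m
```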
So if the size of an atom is the radius of the outer electron shell, then everything else must be squashed up inside. This is consistent with the atomic model of Konstantin Meyl: Atomic structure: Meyl
Electric charge is a redundant concept
The idea of electrical charge is surplus to requirements as far as the field physics of Konstantin Meyl is concerned. It is only a theoretical ‘convenience’ in classical physics and even there it is an unnecessary distraction from what is actually happening.
Objects possessing an electrical charge are said to attract or repel other charged objects by virtue of a static electrical field emanating from that charge which in turn affects the charge on the other objects. The charges are attracted and somehow drag their host ‘matter’ along with them. See: Static electricity
We know that charge exists by the field it creates and we can measure the strength of that field by the effect it has on other charges. So we don’t measure the charge directly, only via the field it has ‘created’. We don’t ever see the charge or the creation of the field.
The effect of one charged particle on another is only ever observed when the particles are a distance apart so that the two charges never interact with each other directly and always use a field to somehow transmit the effect over a distance.
A particle, charged or otherwise, never really acts in accordance with another distant object but only as a consequence of local field conditions. Moreover, the substance of the ‘object’ itself does not seem to be of any relevance to these interactions, with ‘matter’ seeming to exist merely as a vehicle for ‘charge’, and ‘charge’ acting as a sort of intermediary to justify the presence of a field.
This is all starting to sound very circular, and we should begin to think that if charge and matter don’t really do anything within our theoretical framework then they shouldn’t really be there at all; all that is required is a set of rules for an electromagnetic field theory.
The whole thing is sounding like ‘sticky-plaster’ science whereby, one by one, concepts have been added to an existing framework as and when required, or as is fashionable.
How did this come about?
‘Charge’, as a ‘property’ of the familiar ‘matter’ was sufficiently ‘matter-like’ for the existing materialists to stomach and certainly something is needed to explain the observed effects. This new ‘property’ of matter (charge) seems to have effects at remote distances so it must somehow be responsible for creating a force-field.
Better to have ditched the idea of ‘matter’ and gone straight for a field theory.
Electrical torsion fields stabilise into spherical vortices which have the impression of solidity via their stability and propensity to bounce off each other owing to the repulsive forces generated by a field-negative vortex radius.
Mass
The concept of ‘mass’ is similarly redundant and attempts to define it just result in a confused mess. Newtonian physics has three types of mass which are all the same somehow, whilst relativity tries to define mass as the degree to which space is bent by an object but also, at the same time, as the degree to which an object will accelerate when placed in space that was bent by some other object.
Mass is not coincident with matter but a property of it somehow in Newtonian physics and in relativity it is some complex interaction between objects and ‘space’ which is itself somewhat undefined.
Mass cannot be measured directly but can only be calculated from its imagined gravitational effect on other masses or from the gravitational effects exerted upon it by those other masses.
Mass is therefore a theoretical construct derived from observations and measurements made concerning other quantities. The concept of a ‘field’ is necessary to explain the behaviour of objects in space but not mass or charge.
The idea that there is something that is radiating gravity out into space is fanciful nonsense and at odds with observations. See: The nature of gravityNewton’s gravity
Energy
The idea of energy is even more confused. There are units to quantify it but, as with charge and mass, there is no way of measuring it directly and it must be calculated from other quantities.
The principle of conservation of energy lies at the heart of physics for many: “The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time” – Wikipedia
But according to other physicists there is no such thing as ‘absolute’ energy and nor is it conserved: [video].
To see this in a simple way, imagine you are out in space and I fly past in a spaceship. I am travelling fast relative to you and so have great kinetic energy. Now imagine you turn on your rocket engines and catch up with me. We are now stationary relative to each other, stationary relative to local ‘space’, and therefore have no kinetic energy – so where did the energy go?
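The frame-dependence of kinetic energy is easy to make concrete; the mass and speed below are assumed purely for illustration.

```python
m = 1000.0       # spaceship mass, kg (assumed)
v = 10000.0      # ship's speed relative to the first observer, m/s (assumed)

ke_seen_by_bystander = 0.5 * m * v**2   # observer at rest: 5e10 J
ke_seen_by_pursuer = 0.5 * m * 0.0**2   # observer matching speed: 0 J

print(ke_seen_by_bystander, ke_seen_by_pursuer)
# Same ship, same instant: the 'energy' depends on who is measuring it.
```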
Something which has no absolute value surely cannot be ‘conserved’ in any sensible way.
Potential energy
Energy is said to be of different ‘types’: kinetic, potential, gravitational, heat, chemical, electrical etc. It is said to transmute from one sort to another and be conserved along the way, but what is it that is ‘conserved’ exactly? If energy is always of one specific ‘type’ or another then what exactly is ‘energy’ itself? Why is the same word used for lots of different things?
If I hold an apple 10 feet above the ground it will have a certain quantity of potential energy and if I move it to 100 feet above the ground it will have considerably more potential energy. However, if I now move it all the way to the neutral point between the Earth and the moon it suddenly has no potential energy at all – so again, where did the energy go?
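The same point can be shown arithmetically: the value assigned to potential energy depends entirely on where the zero is placed. A minimal sketch, with the apple’s mass assumed:

```python
# The same apple, ~100 ft (30 m) up, under two common zero-point conventions.
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m
m = 0.1         # apple mass, kg (assumed)
g = 9.81        # surface gravity, m/s^2
h = 30.0        # height above ground, m

pe_zero_at_ground = m * g * h                # ~29 J, positive
pe_zero_at_infinity = -G * M * m / (R + h)   # ~-6.3e6 J, negative

print(pe_zero_at_ground, pe_zero_at_infinity)
# Two wildly different numbers for the same apple: only differences in
# potential energy ever enter a calculation; the absolute value is a choice.
```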
The concept of energy is a useful tool under specific circumstances but it is not absolute, not conserved, not directly measurable. It is not ‘a thing’ and therefore cannot be considered fundamental in any sense.
Photon absorption
Heat transfer is said to happen via various mechanisms including the following: A photon of light will fly close to an atom and the energy will be absorbed by an orbiting electron. The electron will make a discrete jump from one orbital to another with the difference in energy levels matching the energy of the original photon. The photon disappears from the universe.
Questions:
How close does the photon need to go and why?
Where does the photon go to after being absorbed?
How is one form of energy converted to another? What is the mechanism?
What if the photon has too much energy for the orbit? Where does this extra energy go to?
What if the energy of a photon does not precisely match any electron orbital? Does it simply pass through the material?
What does it mean to say a photon has ‘energy’?
What laws of physics control this procedure?
Electromagnetic radiation such as light is described by a wave equation and that is all. There is nothing in the equation to say how this wave turns into the ‘velocity’ of a nearby electron or how the wave itself might disappear. Similarly, the orbital of an electron is described in terms of atomic forces (or now in terms of a probability cloud) and there is nothing in these laws that says how a probability cloud can be enhanced by, or absorb, a portion of electromagnetic field.
If I drop an apple to the ground, it does not deplete, or cause to vanish, the gravitational field that drew it there; so why should an electron cause a magnetic field to vanish? Where are the laws of physics that describe this?
This phenomenon and others are always described in terms of energy transfer instead of the basic equations of electromagnetism or gravity.
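For reference, the standard bookkeeping being questioned runs as follows: a photon of frequency f carries energy E = hf and is absorbed only if that energy matches the gap between two electron levels (for hydrogen, E_n = −13.6 eV / n²). A sketch of the arithmetic, using the textbook hydrogen values:

```python
# Textbook picture: a photon is absorbed only if E = h*f matches the gap
# between two electron energy levels. Hydrogen levels: E_n = -13.6 eV / n^2.
h = 6.62607015e-34    # Planck constant, J*s
eV = 1.602176634e-19  # joules per electronvolt

def hydrogen_level(n):
    return -13.6 / n**2   # level energy in eV

gap_1_to_2 = hydrogen_level(2) - hydrogen_level(1)   # 10.2 eV
f = gap_1_to_2 * eV / h                              # required photon frequency
print(f"gap: {gap_1_to_2:.2f} eV, photon frequency: {f:.3e} Hz")  # ~2.47e15 Hz
```

Note that this is pure bookkeeping: nothing in it describes a mechanism, which is precisely the complaint in the questions above.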
What is Energy?
We therefore have a physics that consists of a set of seemingly incompatible equations for electromagnetism, gravity, atomic forces etc. each providing a ‘law’ for a different physical ‘stuff’ and these are all somehow glued together by the concept of ‘energy’ and the conservation of energy. A set of fundamental constants allows for the theoretical conversion of different measurement units but there seems to be no description of how different energies are converted to each other or even what ‘energy’ consists of.
How does a magnet lift a weight off the Earth? Do we use the laws of gravity or the laws of magnetism? Both laws are clearly in play but are separate theories with neither theory compatible with the other. The two are somehow welded together via some fundamental constant, but where is the theory describing how this fundamental constant acquired the value that it did?
Think again of letting an apple fall to the ground. The increase in speed seems reasonable as we have equations relating the force of gravity to the acceleration and mass of the apple – but try to think of this in terms of energy conservation and the math gets easier but the understanding gets harder! The apple falls by having its potential energy converted to kinetic energy! How? What process performs this magical alchemy? Are the two energies the same thing or not? Why does it need ‘converting’?
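The two routes do give identical numbers, which is why the energy method is so popular; the drop height below is assumed:

```python
# Two routes to the impact speed of a dropped apple.
import math

g = 9.81         # surface gravity, m/s^2
height = 10.0    # drop height, m (assumed)

# Route 1: kinematics (force -> acceleration -> speed)
t = math.sqrt(2 * height / g)
v_kinematics = g * t

# Route 2: energy bookkeeping (m*g*h = 0.5*m*v^2; the mass cancels)
v_energy = math.sqrt(2 * g * height)

print(v_kinematics, v_energy)   # identical answers, ~14 m/s
```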
The idea of energy seems to work for practical purposes but as an aid to actually understanding what is going on it is really just a sleight of hand technique, a universal wallpaper to cover the cracks in the plaster veneer that is theoretical physics.
Quantum wave function
At the heart of quantum physics is the Schrödinger wave equation. This describes ‘matter’ as a continuous wave function in a physical ‘field’, but physicists were still stuck on the idea of matter as consisting of particles and so they interpreted a perfectly good theoretical construct accordingly.
A wave is just a wave, but it now seems universally accepted that the wave function represents a probability function whose value at any point is the probability of finding a particle at that particular position in space. We therefore have both particles and field quantities described by the same construct. Fair enough, maybe, as long as it is recognised that this is just a theoretical construct.
However, it seems to be commonly accepted now by many people that the wave function is a ‘real’ thing and that the particle ‘exists’ as both a wave and particle at the same time, that it exists in several places at once and that it is brought into physical being by a physical ‘collapse’ of such a function. We even have people saying that the act of collapsing is instrumental in the creation of consciousness.
And all this despite the obvious facts that:
Nobody has directly measured such a function, nor can they ever do so
The mechanism of collapse is not described
Such a collapse would violate the principle of causality (spooky action at a distance)
It is never explained what is meant by something being two different things at once, existing in two different places at the same time, or being alive and dead at the same time. This is just linguistic trickery.
There is no description of how a probability function turns into ‘matter’
A probability function is a mathematical construct and was never posited to correspond to anything real; it was just a means of describing the aggregated output of multiple events. The quantum physicists have put the idea of randomness at the heart of physics. They have, without justification, created the notion of a random process at the heart of reality, thereby destroying any hope of a deterministic description of the universe.
‘Randomness’ in mathematics is a description of an outcome, not the means of generating that outcome. However this is what it seems we are asked to believe, that a completely fictitious process with no defined mechanism and unfettered by any sensible or realistic laws, is in fact at the root cause of everything that happens in the Universe.
This is surely a complete abrogation of all intellectual acuity.
What is ‘time’?
We have no direct way of measuring time and the best we can do is to count the number of oscillations of an atomic clock and declare the result to be representative of elapsed time. A big problem with this is the following chart taken from Meyl’s book: “Scalar Waves..”, which shows that two atomic clocks in the same room but oriented differently will keep very good time with each other – except during an eclipse!
We do not therefore have a direct way of measuring time independently of all the other variables of physics, such as length, energy, frequency etc. All we have of Reality is a collection of observations of instrument readings, and from these we induce various quantities according to a theoretical model.
‘Time’ is no different, and the exact nature of this ‘entity’ will depend upon the model used to interpret the measurements.
Try to imagine that time were to ‘speed up’ and all the workings of the universe, including our perceptions were to speed up accordingly. In this case we simply would not notice what had happened and would carry on regardless. What then is the purpose of ‘time’?
Too many ‘stuffs’
The basic problem here is that there are far too many fundamental ‘stuffs’ in physics, too many basic ‘entities’ such as matter, charge, energy etc., too many different ‘forces’ (gravity, electric, magnetic..) and no single unified theory. Each of these entities needs some constants to enable integration into the system, but these constants have also been declared ‘fundamental’, as they must be, since they connect together ‘fundamental’ quantities.
The declaration of everything as fundamental and irreducible obviates the necessity for further research and so physics as it is currently formulated can never progress in this regard.
Thinking that mere ‘artefacts’ (theoretical constructs) of the system really represent ‘real’ entities further confuses the issue and leads to speculations that are beyond absurd.
The Theory of Objectivity from Konstantin Meyl assumes only one type of ‘stuff’ and that is a Field quantity of electromagnetic nature; everything else is an emergent property of that field and no fundamental constants are needed.
Fundamental particles are calculated as field vortices and their sizes and weights can be calculated directly with no additional information. See: Atomic structure: Meyl
Gravity is an emergent property of the field and so are mass and charge; there is no need for these extra concepts and no need for hand-waving arguments to show how the one might affect the other.
The case of the photon lifting an electron to a higher orbit is a good example. A photon is a ripple in the field structure, but so is an electron or a whole atom: a more complex vortex structure in the same medium. Both obey the same laws of physics.
Imagine a ripple in a river encountering a whirlpool and becoming absorbed by it. The whole activity happens according to the laws of fluid dynamics and there is no need to suppose an intermediary of ‘energy’ conservation to explain the phenomenon. There is no transmutation between electricity, energy and matter as everything is made of the same ‘stuff’; all we witness is water behaving according to the fundamental laws of water.
In this case, nobody imagines that the ripple is made of a different fundamental substance to the whirlpool and nobody seeks to add a new law to the lexicon of physics. It is acknowledged that the laws of fluids should either suffice or be discarded.
A larger, more vigorous whirlpool will have a greater effect and a greater persistence, and this can be quantified (simplified) as ‘energy’, but that doesn’t mean that there is a separate substance called ‘energy’ or that it is conserved. A vortex has a certain ‘identity’ or character of its own and will demonstrate distinctive and repeatable behaviour which can no doubt be studied in its own right, but it is still made of water and must ultimately obey the same laws as the rest of the river.
Certain behaviours will appear random and difficult to predict but that doesn’t mean that there exists a fundamental uncertainty concerning rivers and there is absolutely no need to invent an infinite number of alternative rivers in order to ‘explain away’ any constants that might arise as simple artefacts of the theory.
The image that most people have of an atom is similar to that shown here with tiny electrons whizzing around a larger nucleus comprising an assortment of protons and neutrons. This makes it easy to imagine many of the things we are asked to believe about atoms. However, the atom does not look like this even according to mainstream science!
We can imagine electrons becoming detached from an atom and flowing along a wire to become an electric current or we can picture one of them dropping from a higher orbit to a lower one to release a photon, but the atom does not look like this and an electron is not as depicted in the diagram and so all our imaginings are in vain.
The Electron Cloud model
Science articles and YouTube lectures by physicists all depict something like the images below representing electron clouds or atomic orbitals.
Each diagram is supposed to represent the probability of finding an electron at a particular place, with darker shading representing a higher probability.
These are the electron orbitals, but the electrons do not actually move along an orbital path; all we know is that we might measure one somewhere if we use the correct instruments.
So electrons:
Have an orbital but do not orbit
Have a property called ‘spin’ but they do not spin
Have angular momentum which appears to be unrelated to spin
Have mass and charge (fictional qualities anyhow)
Have a position which is only specified by a probability function
Have energy and linear momentum
It is recommended to watch this short video from the Science Asylum to see how mind-bending the whole thing is, even for the commentator himself.
This is certainly confusing, with the Wikipedia entry on Electron Clouds even proclaiming “The electron cloud is not really a thing.”!
The idea of a probability cloud comes from quantum mechanics but it doesn’t really explain anything. It doesn’t explain why the cloud is this particular shape or where the electron is when it is not being measured.
Quantum mechanics seems to work fine as a theoretical construct for some applications but that does not mean that the ideas can be automatically ported across to other areas of physics or that abstract concepts such as probability functions have any meaningful interpretation in the Real World.
The new version of the atom is clearly quite different from the traditional Bohr Model and should supplant it completely but what has happened is that scientists have continued to express it in terms of existing and familiar concepts such as charge, mass, probability, energy etc. when they should have scrapped the whole lot and started from scratch.
How to resolve this?
The atom as a Field Vortex (Konstantin Meyl)
The diagrams shown below are taken from Konstantin Meyl (Potential Vortex – Volume 4). Compare with the electron clouds above.
The resemblance is quite remarkable.
These are drawings of the theoretical construct of atoms consisting of field vortices. Think of these for now as electrically charged soap bubbles. The charges on the bubbles will repel each other, leading to deformation of the usually spherical bubbles into shapes such as the ones shown.
In actuality, these apparently solid constructs are stable states of a continually moving field structure which has a tendency to form spherical vortex structures consisting of spinning electric fields with associated magnetic dipoles.
We can now resolve some of the conceptual problems associated with the electron cloud model.
Orbitals but no orbiting. Movement around the orbital is field movement as opposed to movement of a charged particle.
Spin but not spinning. Spin comes from the field and creates a magnetic dipole but no mass is moving and so there is no spinning as usually thought of.
Angular momentum. This is now unrelated to the spin as the spin is field movement and not the orbiting of a charged particle with mass.
Charge is an illusion created by the presence of a field but it is not of a particulate nature and is not necessarily associated with a mass.
Indeterminate position. The field constitutes the entire electron orbital and so the ‘position’ is distributed over the whole orbital. The imagining that there is a small charged particle whizzing around has caused confusion. Probably the assumption of a particle, together with the measuring technique and some unusual mathematics, is what is causing this illusion. The field is not quantised until it is measured.
Fuzziness of cloud. The field is not strictly confined to an infinitely thin shell but extends beyond it with decreasing field strength thereby giving the impression of a ‘probability cloud’.
Summary
The resemblance between contemporary visualisations of electron clouds and the almost hand drawn diagrams of Konstantin Meyl is too great to ignore. There must be something of worth here.
The vortex model of Meyl needs wider publicity and the Bohr model is now a misleading fantasy. The current model of mainstream science is a horrible mash-up of classical ideas (particles, mass, charge etc.) together with the quantum weirdness of probability wave functions which have no reasonable interpretation in the physical world.
The field equations of Meyl give a self-consistent theoretical explanation for the shapes and phenomena we see and allow for quantitative predictions to be made without the need for a multiplicity of ‘properties’.
We often hear now that the electromagnetic force is many times greater than the gravitational force and that therefore the dominant force in nature is the electromagnetic. This assertion is then used to support the notion of the Electric Universe or even Flat Earth. The initial assertion is nonsense and so to draw conclusions from it is not valid.
Gravity and electromagnetism in classical physics are completely different entities and are expressed in different units. They are incommensurate quantities and cannot be compared to each other.
The idea of expressing them both as the same thing, a force, is a sleight of hand which makes for good practical physics but obscures the fact that they are different entities with different mechanisms.
The idea that the one is stronger than the other comes from the choice of units used and the fundamental constants that enable the transformation of the one into the other.
The gravitational force between two objects depends upon the masses of the two objects and the distance between them and is calculated via the formula F = G m1 m2 / r^2.
The ‘units’ of gravitational force are therefore ‘mass squared divided by distance squared multiplied by the gravitational constant G‘.
The absolute value of the force will depend upon choice of units for both mass (kilogram, pounds or ounces) and distance (metre, mile, Angstrom) and also the value of the constant G.
Electrostatic force is determined by the amount of charge on the objects together with the distance between them and a completely different constant ‘k’. The units of electrostatic attraction are therefore charge squared divided by distance squared multiplied by a constant, ‘k’.
The absolute value of each force again depends upon the choice of all units involved, with the values of the constants k and G being chosen, not entirely arbitrarily, but with specific reference to each other, in order that two different quantities with completely different mechanisms may both be expressed via the same vocabulary (‘force’) and may attain comparable values so that physical comparisons make sense.
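A minimal sketch of the two formulas side by side makes the point: the algebraic shape is identical and only the constants and the chosen test objects differ. The example masses and charges are assumptions.

```python
# The two inverse-square laws side by side. Same 1/r^2 geometry; the only
# bridge between them is the pair of constants G and k.
G = 6.674e-11    # gravitational constant, N*m^2/kg^2
k = 8.988e9      # Coulomb constant, N*m^2/C^2

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r**2

def electrostatic_force(q1, q2, r):
    return k * q1 * q2 / r**2

# Assumed test objects: two 1 kg masses vs two 1 microcoulomb charges, 1 m apart
print(gravitational_force(1.0, 1.0, 1.0))     # ~6.7e-11 N
print(electrostatic_force(1e-6, 1e-6, 1.0))   # ~9.0e-3 N
# Which one 'wins' depends entirely on the test objects chosen.
```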
A toy magnet will stick to a fridge whilst it is very close or touching but will fall to the ground due to gravity as soon as it moves even a short distance away from the metal. Try lifting a granite boulder against gravity with even a large magnet and you will fail.
The practical strength of the two fields therefore depends upon specific circumstances, and the numbers used depend upon the units chosen.
The idea of a ‘force’ in physics is a clever artifice to enable comparison between two qualitatively different entities. The comparison is possible because of a shared behaviour (attraction); the construct of a force allows behaviour to be studied as separate from mechanism. One could go further and say that it allows behaviour to be studied as if it were a mechanism.
The fact that this is even possible is indicative of some sort of unity between gravity and electromagnetism. It may seem natural that a magnetic force could cancel out an electrostatic or gravitational force .. but why? Nobody imagines that the colour blue for example could cancel out an E minor chord or, for that matter, an avocado pear. Frequencies do not cancel out vegetables so why should it be possible for mass and charge?
The eventual answer is that there is a unified field of an electromagnetic nature that in certain circumstances can manifest as the force we call gravity. The field is as described by Konstantin Meyl in his book ‘Scalar waves: a first Tesla physics textbook’. meyl.eu
The confusion of electromagnetism and gravity has arisen partly because of the physical scale of human beings. If we were the size of galaxies we probably wouldn’t worry too much about electric or subatomic forces and if we were the size of an atom we would probably not care about, or even notice, the force of gravity.
The effects of gravity became apparent long before electromagnetic field theory and so the assumption of gravitational force was taken to be fundamental and as arising from something even more fundamental namely the idea of ‘matter’. Both of these however are emergent properties of the Universal Field and humanity will prefer to cling to that which is familiar for some time yet: Does gravity exist?
Summary
To say that electromagnetism is stronger than gravity is equivalent to saying that charge is stronger than mass, i.e. it is nonsense.
The idea of gravity as consisting of attractive forces emanating from objects with ‘mass’ is easy enough to understand but leads to problems, as explained in a paper by Tom Van Flandern. Anomalies can be resolved by thinking about gravity in a slightly different way and by analogy with the flow of water in a river.
Key anomaly
The Earth is said to orbit the sun but the position of the sun is not fixed – it is displaced by a distance of over a million kilometres by the gravitational fields of the Earth and other planets. Despite this, the gravitational pull on the Earth seems to be always towards the sun as it is at present, and never towards where it was a few minutes ago.
From Tom Van Flandern
Some scientists expect the gravitational field of the sun to radiate out from the sun at the speed of light. It takes 8.3 minutes for light to travel from the Sun to the Earth, so the light we see always comes from the position where the sun was 8.3 minutes ago. It is expected, then, that we should always experience on Earth a gravitational pull that was generated 8.3 minutes in the past.
This gravity vector travels towards us and will exert a pull towards the place from which it was created 8.3 minutes ago. This never happens and the pull is always towards the ‘present’ position of the sun thereby giving the impression that the gravitational field has travelled almost instantaneously from the sun to the Earth.
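The 8.3 minute figure is straightforward to verify:

```python
# Light travel time from the Sun to the Earth.
AU = 1.496e11    # mean Sun-Earth distance, m
c = 2.998e8      # speed of light, m/s

t = AU / c
print(f"{t:.0f} s = {t / 60:.1f} minutes")   # ~499 s, i.e. about 8.3 minutes
```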
“Standard experimental techniques exist to determine the propagation speed of forces. When we apply these techniques to gravity, they all yield propagation speeds too great to measure, substantially faster than lightspeed.” – Van Flandern
Newton’s law: “Every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centres. Separated objects attract and are attracted as if all their mass were concentrated at their centres.” – Wikipedia
Newton did not like this: “That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.” – Newton 1692 – Wikipedia
Tom Van Flandern: “The most amazing thing I was taught as a graduate student of celestial mechanics at Yale in the 1960s was that all gravitational interactions between bodies in all dynamical systems had to be taken as instantaneous. This seemed unacceptable.. “
Confusing, certainly, but acceptable nevertheless if viewed from a slightly different viewpoint. Gravitational interactions are not actually between bodies at all but between a body and its local gravitational field, even according to Newton. The interaction is in fact instantaneous but is local rather than distant; there is no need for it to ‘travel’. Moreover, cause can be said to flow from the force to the body and not the other way around.
There is no need to suppose that the bodies ‘know’ about each other, only that they are both subject to some sort of coordinated influence that will tend to move them closer together.
There is equally no need to assume that the force is caused by either object, only that it exists, has certain properties and will form certain patterns. Superfluous assumptions lead to confusion.
Such a cause is never observed directly and nor is a distant influence of one body upon another. Neither of these assumptions is necessary to make effective predictions about how the bodies will behave.
All we really observe is two objects coming together according to certain ‘laws’. The inverse square law is easily observed but the dependence upon mass is problematic.
Mass is never observed and is only ever measured by the degree to which attraction occurs and so strictly speaking we have no such thing as ‘mass’, only observed acceleration of objects towards each other.
Objects falling to the ground will accelerate towards the Earth at a rate that is independent of their mass.
Mass is merely a computational convenience. The idea of ‘force’ likewise is a fictitious construct to mediate between gravity, inertia, electromagnetism and mechanics.
A water vortex analogy. The substance of water is analogous to an all-pervasive ‘field’ in space that influences the motion of the planets and stars. In the image (right), nobody imagines that the sink at the centre is causing its own little whirlpool; rather, it is the global vortex activity that gives rise to the sink at its centre.
Similarly, the galactic centre is not creating and directing its own spiral arms via gravity; instead, all the matter in the galaxy moves according to local field forces that organise the solar systems and have a tendency to spiral inwards, much as the water in the whirlpool does.
Similar forces organise our solar system and will concentrate energy towards the sun where it is converted to photons and ejected at the speed of light to form sunshine. The Sun will therefore never run out as it is an energy transducer rather than a big bonfire.
In the image above, the vortex is happy to conform with the general flow of the river and flow hither and thither with the rest of the stream. The centre of the vortex will not usually be out of step with the main vortex, as it is caused by, and is part of, that vortex.
Similarly, our sun will not be out of step with the gravitational fields of its own planets, as its movements are determined by them; it has no motive force of its own.
If the water were to encounter an obstacle such as a rock, there would be an adaptive change to the vortex shape and its internal forces, and this change would in due course lead to an altered trajectory of the vortex centre. The change would take some time to have an effect and this time would depend upon the precise evolution of the vortex geometry. Effects spiral inwards.
With no external influences, the flow would move in stereotypical patterns that would, after some investigation, be amenable to scientific description, with stories of forces and inertia being sufficient to make quantitative predictions. Big vortices have a large ‘mass’ and hence ‘momentum’ and this allows them to push smaller vortices out of the way but in reality this is due to the large field forces surrounding the vortex as opposed to any innate property of the vortex centre itself.
The motion of a speck of dust on the surface of the water can be described with a radial and a tangential component and this can be interpreted as free movement (inertia) around the orbital with some sort of ‘force’ pulling the speck (mass) towards the centre of the spiral. What is observed however is motion, not forces.
Similar considerations then apply to our solar system. Space is permeated by a ‘living’ field which influences all the celestial bodies and is ultimately responsible for their movement and indeed creation. It is not the case that the stars are sending out radiative fields to pull other bodies towards them; the Universal Field instead oversees all cosmological organisation.
Is space really permeated by an infinite unseen force field? Well this is what Newtonian theory says and most people seem content to think so.
Is it really the case that a weak disembodied force can influence the movement of the sun or entire galaxy? See answer above.
If the force is not radiative then what is the inverse square law? See the image of the whirlpool. Energy spirals inwards here and forces become stronger towards the centre. The inverse square law is solely a consequence of geometry, whatever the nature of the actual force.
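The geometric character of the inverse square law can be seen in a few lines: anything spreading outward from a centre is diluted over a sphere of area 4πr², so its intensity falls as 1/r² regardless of what is actually flowing. The flux value below is arbitrary.

```python
import math

total_flux = 1000.0   # arbitrary units of 'whatever is flowing'

for r in (1.0, 2.0, 3.0, 10.0):
    intensity = total_flux / (4 * math.pi * r**2)   # spread over a sphere
    print(r, intensity)   # doubling r cuts the intensity by a factor of 4
```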
What would happen if there were an explosion on the Sun? This is similar to what would happen if a firework were tossed into a water vortex. The vortex field would be disturbed and the effect would propagate outwards from the centre at a speed we call the speed of sound in water. This is analogous to speed-of-light effects propagating out from the sun. This is not, however, the same as the sun radiating out gravity waves on a daily basis to keep the Earth in orbit.
Just because some gravitational effects propagate out from the Sun it doesn’t mean all of them do. The formulation of a gravitational field supposedly emanating from mass together with the inverse square law has caused scientists to attribute all of the effects they see as being caused by the same radiative force called gravity all coming from a centralised source.
Laws are formulated under this assumption and because they seem to make good predictions they are then accepted as some fundamental truth.
Does a Gravitational Field Continuously Regenerate, or is it “Frozen?” The field does not need to regenerate as it is not produced by anything let alone the sun. It is self-generating and operates according to its own laws with the inverse square law being a simplified observation of something that happens near planets. The field is in a constant state of ‘movement’, it contains its own ‘energy’.
What is this ‘Law’ you speak of? The field equation of Konstantin Meyl.
What is fundamental? The field equation is fundamental. This is analogous to the field equations that describe the flow in water: the Navier-Stokes equations.
What is not fundamental? Everything else; everything that happens that we actually observe. Ripples travel through a whirlpool with some consistency but this doesn’t mean that they form a separate fundamental entity called ‘photons’ or whatever; they are an emergent phenomenon dependent upon the underlying properties of water.
Two small sticks move closer together on a pond via a resonant ripple effect. Are they both emitting gravity waves which attract each other? No! This is just an illusion; it is the water, the substrate, that is causal here, not an inert bit of wood. Planets do not attract each other; it is just that space moves them together.
What is the field ‘like’? The field is a dynamic version of Maxwell’s equations where electricity and magnetism are merely different aspects of the same thing. Constant movement of the field makes it ‘alive’ and enables propagation of emergent effects such as light. Other effects contribute to the concept of ‘energy’, which again is not fundamental but a way of expressing observations of specific patterns in field movements.
How does light propagate? As ripples moving across a pond may traverse a water vortex, so light is merely a modulation of the ambient field, and its trajectory will be determined by that field. The field itself is the substrate for field modulations.
“There is only the Field” – Meyl
Ripples will follow a vortex and move at the speed of light (sound) within that vortex and as a consequence its speed is added to that of the vortex moving through space. Two vortices moving towards each other may therefore view light in the opposite vortex as moving faster than Einstein’s ‘c’.
Where does the energy come from to move massive objects? All objects are just manifestations of the Field themselves and will operate according to local field conditions. There is no matter or even mass as distinct from field configurations and so no need for any transfer of ‘energy’ between different type of fundamental stuff.
An object in a gravitational field is moving under its own ‘steam’. The local field is propagating according to local conditions. Propagation is on a point-by-point basis and each point has no concept of the total ‘mass’ of the object. This makes it obvious that the acceleration under gravity is independent of the mass of the object.
Precession of the equinox. The Earth is said to undergo ‘precession’, to rotate in the sky in synchrony with the Pleiades star cluster, Sirius and the whole of our Solar System. The whole cycle takes about 26,000 years. Nobody believes that all these bodies are somehow dragging each other around by means of a radiative force. [video]
What is happening is that all these ‘masses’ are caught up in the same galactic helical field vortex which spans several light years and is responsible for the rotation of all bodies within its sphere of influence.
To try to imagine this as a collection of radiative forces is just too difficult but to picture it as a giant eddy current in a flowing galactic ‘river’ gives a nice idea of what is going on.
Newton’s concerns: “That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.”
Newton’s basic view of the Universe, which is reinforced by his mathematical theories, is therefore one where a collection of solid objects called ‘matter’ float about in an all pervading ‘vacuum’ that by definition has no properties or useful qualities of its own.
This world view pretty much rules out the development of any theory of gravity acceptable to Newton himself!
Matter is regarded as basic and fundamental but again has no ‘qualities’ as such and needs additional properties such as ‘mass’ and ‘charge’ to somehow allow it to interact with the rest of the universe. The rest of the universe meaning other chunks of matter separated by a lifeless vacuum.
The idea of gravity is an embryonic field theory but Newton was trying to graft it on to a system already overloaded with unnecessary concepts. He was trying at the same time to regard matter and space as being at the heart of reality whilst denying them the possibility of distant communication.
He needed to discard these ideas and start from scratch with Field Theory as fundamental and to then add matter and space back in as being subservient to the field, as emerging from it rather than somehow creating it.
General Relativity. Einstein was on the right track with the idea of an all pervasive universal field but in the rubber sheet concept (right), space and matter are still fundamentally different concepts and the idea of a force arises from the interaction between two such different ‘stuffs’.
“Matter tells spacetime how to curve, and curved spacetime tells matter how to move” – J. A. Wheeler
This is circular and mind-bending with causality being shifted from pillar to post and back. Moreover, it doesn’t say how these things communicate with each other. In our example, the Sun would be the cause of a large dimple in space-time, with movement of the Sun registering as further deformations of the field which propagate at light speed.
This doesn’t help our case as no light speed propagation is observed and the data suggests ‘synchrony’ of Sun and Earth rather than distant ‘influence’.
Einstein was still bewitched by the illusion of ‘matter’ as being solid, real, fundamental and indeed causal in somehow orchestrating cosmic events.
Imagine the diagram above but without the mass. We do not need the mass itself as we can easily detect its ‘presence’ by the distortion of space with which it is now synonymous. No mass ‘moves’ as now the rubber sheet itself is endowed with the properties which will cause movement of the dimple i.e. movement within the field itself. Movement which is consistent with the observed laws of physics.
David Bohm, like everybody else, saw separate objects moving around independently of each other and yet at the same time seemingly in step to produce what he called the Explicate Order. Since inanimate objects are not normally capable of organising themselves there must be an unseen Implicate Order responsible for these patterns. [page]
The Implicate Order then is the field equation of Meyl (above) and the Explicate Order is everything else that we see and measure, from the movement of galaxies to the double-slit experiment of quantum mechanics.
The equation specifies the evolution of the field at every point in space and time with field propagation at light-speeds giving the impression of conventional causality.
This evolution, it is to be stressed, is local and confined to an infinitesimally small point, meaning there is no influence from one point to another over any distance at all, even a trillionth of an angstrom; there is no granularity to reality.
Global order is maintained by a finite propagation speed with the solutions to the equation leading to the large scale patterns we observe, as with the water vortex.
This is the seeming paradox of field equations, that the rules are strictly local but the solutions global. The Implicate order is not a global plan but a local description of field properties, whilst the Explicate Order is the emergent patterns that we actually observe and measure and have mistaken for the Fundamental Laws of Nature.
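A minimal sketch of this local-rules/global-patterns idea, using the ordinary one-dimensional wave equation as a stand-in (not Meyl’s field equation): each point is updated only from its immediate neighbours, yet a coherent travelling pulse emerges across the whole domain.

```python
# A field governed by a strictly local rule: the 1D wave equation, stepped by
# finite differences. Each point sees only its immediate neighbours, yet a
# global pattern (a travelling pulse) emerges from the purely local updates.
import math

n_points, c, dx, dt = 200, 1.0, 1.0, 0.5   # grid and wave speed (assumed)
prev = [math.exp(-((i - 50) / 5.0) ** 2) for i in range(n_points)]  # pulse
curr = prev[:]

for step in range(200):
    nxt = curr[:]
    for i in range(1, n_points - 1):
        # purely local update: point i sees only points i-1 and i+1
        nxt[i] = (2 * curr[i] - prev[i]
                  + (c * dt / dx) ** 2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt

print(max(curr))   # the pulse persists and propagates: global order, local rules
```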
DNA is not a solid complex molecule which contains so much data that it serves as a blueprint for Life, and nor is it self-replicating in any way. Instead it is a spontaneous materialisation and dynamic organisation of matter that arises from the needs and energy flows of the parent cell.
This page contains transcribed material from Stefan’s lecture followed by some comments that try to resolve his descriptions with the laws of contemporary physics.
Stefan Lanka: What biology IS – body and soul biology and the substance life is made of.
Stefan on RNA: “Here we have a typical diagram of DNA. As soon as a small amount of organic material has accumulated along with a few minerals they form themselves and whatever is beneficial for the metabolism stays there longer and is integrated into the chromosomes. This enables our body to learn how to deal with toxins such as alcohol.”
“Bacteria quickly learn to metabolise everything which is presented to them and which doesn’t kill them straight away. What doesn’t kill me makes me stronger!
“The first experiment I did as a biologist was to see that when you keep increasing dioxin concentrations and at the same time suddenly withdraw the nutrient solution, the bacteria start digesting the dioxin. The nutrient solution is depleted and they live solely on this toxin. The same thing happens when antibiotics are used instead of dioxins; the bacteria start to metabolise them.
“If the nutrient solution is then suddenly given back to them and the poison is simultaneously taken away – they die. They first have to re-learn to metabolise the solution and that is what the RNA is for.
“RNA comes in all variations and that is why a PCR test can test anyone positive for anything. All you have to say is that this is the gene sequence for this or that, and after looking for long enough you will find it. Or you let the PCR run for a long time and it will produce sequences that weren’t there before.
“RNA is self generating. RNA is its own catalyst.
“In that sense it is another presentation of Life and of how Life, invisible to us, emerges from this substance water (Pi water)”
Stefan on DNA: “Here we have a model of DNA. Current science believes that DNA defines the body’s metabolism, that it is the dominator. But it is not that in any way. It is a resonator and stabiliser! It changes permanently (continually?) and serves mainly to release energy in the body.”
“The DNA is coiled up in this diagram, as you can see, but geneticists have known for a long time, that DNA is a long strand in the nucleus most of the time, and it only coils up in this X-shaped way when the cell is dividing. So it is ‘unwound’ most of the time. And not only that, it also constantly builds up and unwinds again. It is not a fixed strand that never changes, it constantly builds up and reassembles itself.”
“The reason why the DNA in the cell nucleus does not get knotted up, is because it constantly builds up and breaks down again, it oscillates. Geneticists, who believe in fixed genes, cannot explain this. It’s a constant transforming, a coming into being, a disappearing again. This is our current knowledge about DNA.“
“But if you are stuck in cellular theory you cannot imagine what you see here. You are forced to think in a too complicated way, you are forced to think in incorrect models. Incorrect models that have been imposed on us throughout our history.“
“This history of ours has culminated in corona. I must say ‘Thank God’ because, through it, we have the chance to get rid of this global dogma, to bring it to a controlled implosion.”
On the atomic theory of reality: “With all this, I have a completely different understanding of my body, of the interrelationships and also connectedness with the cosmos. The atomic theory had prevented this understanding.” – Lanka
“In pre-Socratic times we had the Ancient Greek principle of ‘as above, so below’, and the atomic way of thinking destroyed this way of thinking.
“Democritus said that if we keep cutting through the hemp rope, then suddenly it is no longer hemp, suddenly it is no longer known matter; only atoms remain; we don’t know anything about these atoms but they simply must be there. He then presented this atom theory as an explanation of Life: atoms come into contact with each other, molecules are formed and so on.
“This is 2500 year old rubbish, and it stinks to high heaven; it’s simply incorrect. This has led us to this dead end and maybe, thank God, we are in this dead end, because we must end this global dogma.
“The weakest point in the whole theory is the virus dogma.”
Key points
Our concept of the atom has held back progress
RNA and DNA are energy accumulators
RNA and DNA have some sort of ‘memory’
Both are created ‘out of nothing’
Creation is not sequential but parallel
There is no ‘replication’ as such
The structure of DNA is determined by the cell
PCR tests are just nonsense
Comments
The concept of a molecule that we are familiar with resembles the illustration on the right, where hard, metallic-looking atoms are held together by indestructible-looking bonds which are themselves made of metal, glass or sometimes Bakelite.
Physicists don’t think of them like this (I hope!) but this is how most other people will picture them, and whilst this is fine for many purposes it is quite crippling for the imagination and will strongly discourage any hypothesis that is hard to reconcile with this visual image.
Stefan blames the Ancient Greeks but whilst they had some theory of atomism or materialist reductionism, I can’t imagine that they had in mind the image presented above.
Indeed Konstantin Meyl insists that the texts have been mis-translated and that what was proposed was more like his theory of vortex physics which supposes that what we call ‘atoms’ are really agglomerations of field vortices:
“Remarkable about the passage by Plato is not only the fact, that the potential vortex already was known for 2500 years and was taken into consideration for an interpretation, but also the realization that during the described transition the smells form. Smell thus would be a vortex property!” – Konstantin Meyl: Scalar waves p.189
Stefan Lanka wants to describe biological ‘substances’ such as pi-water, but the current atomic model is in contradiction with his observations; the vortex model of Meyl is much more sympathetic to his needs.
As above, so below.
Atoms are composed of fundamental (spherical/toroidal) field vortices and molecules are collections of such. Within this model, the transmutations of Kervran do not seem so unlikely and Meyl describes how solar neutrinos may be captured by the water vortices and materialised as electrons in biological systems.
The vortex structure is seen at all scales of physical reality and so the Greeks’ principle of ‘as above, so below‘ is now preserved. This follows from the vortex model of Meyl, which has field movement as fundamental, and the hairy ball theorem, which makes a torus structure a necessity; the torus is the only shape able to sustain smooth energy flow without a discontinuity.
Atoms, electrons, blood flow, brain function, weather systems and galaxies are now all composed of the same ‘stuff’ (field vortices) and all conform to a toroidal topology.
If Stefan were to discover the theories of Meyl he would be able to ‘wash and not get wet’.
(DNA) is a resonator and stabiliser! – Lanka Suspension bridges are prone to dangerous resonant vibrations from wind vortices and earthquakes, and are therefore fitted with either counterweights to damp oscillations or connectors (right) at odd intervals to prevent the formation of standing waves.
A cell is in a constant state of energetic vibration and it is therefore perfectly conceivable that some sort of damping system is necessary to absorb surplus energy, whether it be from acoustic waves or electromagnetic pulses (photons).
Various scientists have observed that DNA is the perfect structure to form a fractal antenna meaning it will receive a large variety of frequencies and not just a narrow resonant band. Some have the DNA emitting some sort of instructions to the rest of the cell but the most prosaic explanation is that it is there to stabilise cellular vibration.
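As a rough illustration of the antenna idea (a sketch of my own, not taken from the sources above), a simple quarter-wave model assigns one resonant frequency to one conductor length; a fractal geometry folds many different effective lengths into a single structure and so responds across a wide band. The lengths below are assumed purely for illustration.

```python
# Toy quarter-wave estimate: f = c / (4 * L) for an assumed effective length L.
# The lengths are illustrative assumptions, not measured DNA dimensions.
c = 3.0e8  # speed of light in free space, m/s

for length_m in (2.0e-6, 2.0e-3, 2.0):
    f = c / (4 * length_m)
    print(f"effective length {length_m:g} m -> resonance ~ {f:.3g} Hz")
```

A structure that superimposes many such lengths at once behaves as a broadband receiver, which is the property claimed for DNA here.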
Energy is conserved and in biological systems this must happen at all physical scales from the whole organism to the sub-cellular level. There should exist, by analogy with electrical systems, buffers, accumulators and transducers all over the place to ensure smooth flow but these considerations are not talked about too much.
Energy comes into a cell according to supply but is used up according to demand and these do not necessarily match up so there is a need to temporarily store energy as it becomes available and to release it again as it is required.
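A minimal sketch of this buffering idea (a toy model of my own, with arbitrary numbers): supply and demand fluctuate independently, and an accumulator of limited capacity absorbs the surplus and covers the shortfall.

```python
import random

store, capacity = 0.0, 10.0  # arbitrary units
for step in range(24):
    supply = random.uniform(0, 2)  # energy arriving, varies with availability
    demand = random.uniform(0, 2)  # energy required right now
    store = min(capacity, max(0.0, store + supply - demand))
    print(f"step {step:2d}: supply {supply:.2f} demand {demand:.2f} stored {store:.2f}")
```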
“It changes continually and serves mainly to release energy in the body.” – Lanka
Reality at the atomic level according to Konstantin Meyl is best described as a field structure consisting of potential vortices and eddy currents which are in continual movement and interaction with each other. Some vortices stabilise into what we call ‘atoms’ and others into energetic ‘quanta’ such as photons.
Movement and transmutation are continual, with larger whirlpools sometimes absorbing the smaller before splitting again into different configurations. A molecule of water can be split and the oxygen transmuted to nitrogen and even carbon, so we have the main constituents of nucleic acids created on the spot from H2O and spare energy.
This seems precisely what Lanka is describing above: “(DNA) constantly builds up and breaks down again, it oscillates. Geneticists, who believe in fixed genes, cannot explain this. It’s a constant transforming, a coming into being, a disappearing again. This is our current knowledge about DNA.”
His observations are clearly consistent with the atomic model of Konstantin Meyl.
The adaptability of cells and bacteria has been confirmed by many researchers, with the ‘Hill effect’ demonstrating increased resistance to toxins in not just the poisoned cells, but also their non-poisoned relatives!
Mae-Wan Ho in The fluid genome describes bacteria with a defective lac-z gene adapting to the introduction of lactose. The cells could not process it at first but soon adapted and ‘corrected’ their defective gene, thereby ‘remembering’ the new metabolic process and passing it on to the next generation.
Causation is top-down in biological systems, with the cellular cytoplasm forming a de-facto cognitive system which is obviously capable of interpreting input and registering the response as a very precise and directed alteration of DNA sequences.
This is what has been missed and seems inconceivable to most people: that it is the cellular activity which is responsible for creating the precise structuring of the DNA and not the other way around!
If the DNA is coming and going as Lanka claims, then it cannot itself be the storage medium for cellular memories or metabolic programs. There must be something else.
John Stuart Reid in his video shows that cymatic patterns induced in water droplets were produced more readily if the droplet had experienced them before, and the rates increased with repeated exposure. So a memory of the procedure has somehow been created and recalled at a later time in response to a similar stimulus.
Where is this memory stored? There is no DNA here that we know of, and the molecular structure of water is not fixed, so it seems unlikely that any data can be stored in the physical substance of the water. We are again left with the idea that something else is necessary.
A bio-field based on the magnetic vortices as described by Konstantin Meyl is an obvious solution to all these conundrums. Physical patterns or electrical disturbances in the water or in the movement of DNA strands are registered in this informational field and become available for use at a later time.
If DNA vanishes then it can be re-materialised again from the information residue in the bio-field and if the cell needs to reproduce, a copy of the DNA is manufactured directly from this field by materialisation and transmutation. No replication needs to take place, there is no need to ‘read’ information from the existing DNA strand as the information is already there, held in a separate domain.
Information can be passed down generations in this way and this is what constitutes inheritance; there is no need to pass on the actual physical DNA as this will be reconstituted from the bio-field information – see Telegony
Conscious materialisation. Lanka has said in another video that “Life is the materialisation of consciousness” (Stefan Lanka: vitalism), indicating that the arrangement of DNA base pairs may not be left solely to the chance workings of the laws of physics but that there may be some specific organisational principle at work behind the scenes.
Here again the scalar waves of Meyl should be considered. He has said that the brain is a scalar wave computer (What is the brain?) and so we now have an actual mechanism for consciousness that is supported by contemporary physics and is potentially capable of organising not only thoughts but the materialisation and construction of DNA and RNA.
This bio-field is electromagnetic in nature and will therefore be sensitive to electromagnetic disturbances, whether generated inside the cell or introduced from outside.
What do isolation experiments show? Virologists are very excited about these and imagine that they are somehow finding small quantities of RNA in tissue cultures but from the above comments we see that they are not isolating RNA but instead are actually creating it from scratch!
There is no replication according to Lanka, only materialisation. The RNA strands are being created from the tissue culture and their structure will reflect the conditions in that culture which arise from a combination of the host bio-field, the chemicals introduced into the culture and very likely the ambient electromagnetic field conditions.
Virologists say that certain viruses are very difficult to cultivate and that very specific conditions and procedures are required. Well, this doesn’t sound consistent with a naturally transmissible pathogen; instead, a strong association between procedure and gene sequence suggests that it is the procedure itself that is responsible for the measured genome sequence.
Virologists also say that they can track a new variant of virus throughout the season by measurement of the genome but all this proves is that the sequencing techniques are somehow sensitive to the seasons and latitude. Either the host organism, the tissue culture itself or the PCR procedure is sensitive to the Earth’s geomagnetic field and it is variations in this that lead to stereotypical changes in the genome.
Kou et al found that different types of influenza (type A, type B, H1N1) tended to predominate in particular locations at particular times of year and that they were often related to dramatic changes in the weather. This does rather suggest that it is the latitude and season themselves that are being reflected directly in the genome sequence.
This is not an unreasonable hypothesis. The body is regulated by a scalar wave network and the cellular bio-field works on the same principle and so we would expect electromagnetic disturbances to affect this process somehow, with the adaptive, interpretive and teleological nature of the cellular system ensuring stable and reproducible results.
The loose correlation between sequenced genome and disease manifestation is also explained. The change in climatic conditions at a specific time and at a specific latitude has the twin effects of making people sick and also of changing the sequencing results. The two effects are linked but not causal with respect to each other, thereby leading to confusion over false positive tests and ‘asymptomatic’ disease.
The idea of a ‘virus’ being ‘replication competent‘ in this scenario makes no sense whatsoever. There is no ‘reading’ of the RNA strand and no molecular machinery to make new RNA to order. The cell itself is in charge of what happens within the cell. DNA and RNA are energy accumulators and transporters and not instruction manuals. DNA is downstream of cellular organisation, not its origins or blueprint.
Cell division. “DNA is a long strand in the nucleus most of the time, and it only coils up in this X-shaped way when the cell is dividing.” – Stefan Lanka This makes a lot of sense; the DNA acts as an energy buffer, absorbing and releasing energy as required until it is time for the cell to divide.
Once the molecules have coiled up into a helix, the laws of physics cause the strands to form an antenna and sufficient energy is accumulated to power cell division. See Meyl on DNA
Comparison with mainstream explanation.
The mainstream explanation as to how DNA is replicated involves complicated molecular machinery and a sequential construction method whereby base pairs are ‘read’ one at a time and then somehow a new pair is obtained, moved into place and fixed onto the end of the new strand.
Miraculous indeed! This is made to seem reasonable by nicely constructed cartoons and videos but in reality it creates more questions than it solves.
How do these machines work in a dense, viscous water gel?
How are the new base pairs moved into place so precisely?
Where and what is the power supply for all this machinery?
How exactly do you ‘read’ a base pair and how is this information represented?
If DNA is constructed by a molecular machine, then what constructs the machine?
Where is the blueprint for the molecular machine and how is it inherited?
In addition to these questions we have the fact that DNA is claimed to consist of 3 billion base pairs that are replicated in about 1 hour. This means that the base pairs are being aligned and attached at a rate of more than 800,000 per second!
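The arithmetic behind that figure is easy to check:

```python
base_pairs = 3_000_000_000  # the '3 billion base pairs' quoted above
seconds = 3600              # 'about 1 hour'
rate = base_pairs / seconds
print(f"{rate:,.0f} base pairs per second")  # ≈ 833,333
```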
This is just not credible without further evidence. DNA is therefore not created sequentially and not transported around the cell or assembled by a machine but created in parallel and constructed on the spot according to either fixed physical laws or information from a distributed bio-field.
The description from Stefan Lanka is one of a gradual emergence from a structured vortex flow that is in tune with the energy needs of the cell. The mechanism is consistent with the known laws of physics from Konstantin Meyl and the observations of biological transmutation from Louis Kervran.
Cellular water is arranged in vortices which continually absorb neutrino energy from the sun and distribute it to the rest of the cell. DNA and RNA act as a buffer to smooth energy flow, with temporary excess being used to transmute the constituents of water into higher energy elements such as carbon and nitrogen.
These elements eventually join together in a spiral structure which accelerates the accumulation of energy and in due course will enable cellular reproduction.
The physical changes seen in cells can be viewed, not so much as the result of mechanical action or ‘design’, but as a reflection of energy management, which works at least in part by the continual transmutation from low to high energy molecular states and back again.
Rupert Sheldrake’s TED talk , “The Science Delusion”, listed ten points of contention concerning ‘accepted’ tenets of modern science. The presentation caused quite a stir and was “taken out of circulation by TED, relegated to a corner of their website and stamped with a warning label.” – Sheldrake
The general theme of the talk is that contemporary physics, as usually described, is mechanical, materialistic, insufficient to describe biology, inheritance or consciousness and is in any case incomplete of itself. Modern science is therefore deluding itself if it thinks it has the answers to everything or even that it could supply the answers to everything, as it is hampered by its own self-imposed constraints.
This is only partly true. There is certainly a strong streak of ‘materialistic’ thinking in all sciences, but field physics, and in particular the Theory of Objectivity of Konstantin Meyl, does not deal with ‘matter’ or even ‘forces’ as fundamentals of nature and therefore paints a very different picture from the one to which we are accustomed.
The desire to reject ‘materialism’ is fuelled in part by an incomplete description of what actually constitutes ‘materialism’.
The ten points:
Nature is mechanical or machine-like
Matter is unconscious
The laws and constants of Nature are fixed
The quantity of matter and energy is constant and was fixed by the big bang
Nature is purposeless and evolution is without direction
Inheritance is via the continuity of the structure of some physical substance (genes)
Memories are retained in the brain as material traces
‘Mind’ is inside the head and consciousness is just brain activity
Apparent paranormal abilities such as telepathy are the illusions of Bad Science
Mechanistic medicine is the only one that matters
3. The laws and constants of Nature are fixed Yes! Of course they are! If not then how does the universe run? How does it maintain pattern, order and stability? If the laws that maintain order are changing all the time then there must be some meta-laws that determine how these changes occur.
The alternative is that things just happen and anyone who thinks that can just give up on pretending to be a scientist.
The problem we have is not whether or not the laws are fixed but whether or not the laws and constants that scientists use to describe reality are in fact the fundamental laws and constants of reality. Countless observational oddities and internal inconsistencies suggest that they are, at best, incomplete.
The laws of physics according to Konstantin Meyl are captured by a single field equation, and from this can be derived the laws of gravity, the Schrödinger equation and the laws of general relativity. So Meyl’s equation can reasonably be described as ‘fundamental’ but the other laws cannot. They are just mathematical representations of isolated laboratory observations.
The speed of light. In his talk Rupert mentions that the speed of light slowed down by about 20 km/s between 1928 and 1945 before resuming its approved value. The response of the standards authorities was to simply re-define the length of the metre in terms of the speed of light so as to correct for the difference. So the speed of light is now a constant by decree (but not by observation) and length is no longer fundamental. But what about ‘time’? Is that not fundamental?
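For scale, the reported slow-down in the speed of light is a tiny fraction of the defined value:

```python
c_kms = 299_792.458  # speed of light as defined, km/s
delta_kms = 20.0     # slow-down reported in the talk, km/s
print(f"fractional change: {delta_kms / c_kms:.2e}")  # ~6.7e-05, about 0.007%
```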
We have no direct way of measuring time and the best we can do is to count the number of oscillations of an atomic clock and declare the result to be representative of elapsed time. A big problem with this is the following chart which shows that two atomic clocks in the same room but oriented differently will keep very good time with each other – except during an eclipse!
So we are stuck with a science that somehow regards length as a variable quantity and has no reliable way of measuring elapsed time, and we can therefore ask: “What then is meant by speed?” or “How can we measure distance travelled per second when we have no stable definition of either a metre or a second?”
We have too many variables and no clear idea as to which are to be regarded as ‘fundamental’.
The solution.
Konstantin Meyl cuts through the confusion with a single field equation (below). This equation alone is ‘fundamental’, and nothing else is.
This is the entire equation: there are no three separate types of mass, no separate forces of inertia, electrostatic attraction, gravity etc. and, as a consequence, no need for multiple ‘constants’ to mediate between such entities.
Both time and distance and the speed of light are dependent upon field strength, with high field strength leading to a shrinking of distance and a slowing of time. Light speed can vary in absolute terms but measurements of it will remain constant to the observer because as lengths shrink, so will time slow down, giving the impression to the observer of a fixed light-speed.
The observer is now part of the experiment and will shrink or speed up along with the experimental equipment and the observed phenomena.
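The self-consistency of this picture can be shown with a toy numerical construction (my own, not Meyl’s actual formulas): if the local ruler and the local clock tick scale together in the right ratio as the field strength changes, a change in the absolute light speed is invisible to the local observer.

```python
# Toy model: ruler length and clock tick both scale with the true light speed,
# so the locally measured value (ruler-lengths per tick) never changes.
c0, metre0, tick0 = 299_792_458.0, 1.0, 1.0

for c_true in (c0, 0.9 * c0, 1.1 * c0):
    scale = (c_true / c0) ** 0.5
    metre = metre0 * scale            # ruler shrinks when the true speed drops
    tick = tick0 / scale              # clock tick lengthens at the same time
    measured = c_true * tick / metre  # distance covered in one tick, in rulers
    print(f"true c = {c_true:,.0f} m/s -> measured c = {measured:,.0f}")
```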
It is the variations in the rate of atomic clocks, owing to changes in the solar neutrino stream, that are likely leading to variations in the measured speed of light.
4. The quantity of matter and energy is constant and was fixed by the big bang. Classical physics is clearly struggling on this one. There can be no explanation of such an initial event in terms of known physics simply because the bang itself, having created the laws of physics, must precede them and hence cannot be derived from them.
According to Meyl, ‘matter’ is a stable balance of positive and negative field elements which together cancel each other out. Matter can be materialised from non-matter and can be destroyed again to leave nothing behind. The total amount of ‘energy’ in a particle is always zero and so the total amount of ‘energy’ in the universe is in fact constant and equal to zero.
Einstein’s famous E=mc² is incorrect and Tesla agreed with this, having claimed to have destroyed billions of atoms with no ill effects.
Note that Meyl’s assertions concerning mass and energy derive straight from his single field equation, which therefore remains the single fundamental assertion, with all other physical entities being emergent properties of that equation.
Contrast this with mainstream physics, where the well-studied entities matter and energy are held to be fundamental and to obey the laws of nature, but at the same time all come from the big bang and so cannot really be fundamental. They even derive from something that is not itself part of the laws of nature, is not describable by them and is fundamentally unmeasurable, untestable and unfalsifiable.
The whole framework is topsy-turvy and badly structured. We need a single testable hypothesis, but what we have is a patchwork quilt thrown together from ideas which are good enough in themselves but bear little relation to each other.
6. Inheritance is via the continuity of the structure of some physical substance (genes) This is just not true. The phenomenon of Telegony is proof of this; the page on The DNA delusion confirms that inheritance has nothing to do with DNA, and the page Evolution and Inheritance puts a good case that inheritance is via some sort of informational field.
It is this field that is responsible for morphogenesis and inherited or ‘innate’ behaviour – does anybody really believe that the nest building abilities of a bird for example could be encoded in a few gigabytes of DNA?
Mainstream biology now only ascribes the function of protein construction to DNA and even then there are only 20,000 genes to encode for 100,000 proteins.
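For reference, the raw storage capacity implied by those base-pair counts is easy to compute (a side calculation of mine; four letters means two bits per base, while the ‘gigabytes’ figure treats each base as a byte):

```python
base_pairs = 3_000_000_000
print(f"{base_pairs * 2 / 8 / 1e9:.2f} GB at 2 bits per base")  # ~0.75 GB
print(f"{base_pairs / 1e9:.1f} GB at 1 byte per base")          # 3.0 GB
```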
What is inherited is, in most general terms, a dynamic pattern of biological activity, or a set of rules for a molecular or neural network. Stable, dynamic patterns are best represented in terms of ‘attractors’ or closed loop control systems, and the suggested physical mechanism for these is the magnetic scalar waves as described by Konstantin Meyl. They are stable, dynamic, can co-exist with matter and are not measurable by modern instruments, which is why they have been missed by scientists so far.
These scalar waves are by far the best candidate for Sheldrake’s morphic field.
A bio-field to create the shape of a snowflake? The image, taken from a Michael Clarage lecture, shows distinctive-looking patterns in the formation of snowflakes. At the same time it is asserted that all snowflakes are different, so how do they achieve self-consistency and variety at the same time?
Physics doesn’t provide a good explanation as to how groups of billions of molecules can apparently ‘know’ what each other are doing so some new physics is needed.
The snowflakes are arranged according to some template which is going to be electro-magnetic and cymatic in nature. It looks like some force-field is creating a pattern in the way the molecules are bonding together. However, Martin Chaplin claims that even this is not true, with there being no fixed pattern of bonds and instead a constantly shifting landscape of molecular connections which somehow seem to maintain a precise overall shape.
“In the case of ice the hydrogen bonds also only last for the briefest instant but a piece of ice sculpture can ‘remember’ its carving over extended periods.”
“.. the behaviour of a large population of water molecules may be retained even if that of individual molecules is constantly changing.” – Martin Chaplin: The Memory of Water
So what is it that is constant? What is it that determines the overall shape?
7. Memories are retained in the brain as material traces Ideas that the brain works by arrangements of neurons or movements of chemical currents have, I think, been ditched in favour of ideas that it works by electric fields or currents, but this still isn’t correct. The brain most likely works as a scalar wave processor (What is the brain?)
Scalar waves are stable of themselves and have all the characteristics required of a medium for the hosting of cognitive computation:
Parallel processing
Associative memory
Speed of light response
Energy renewed by solar neutrinos (?)
De-coupled from the physical brain
The last is particularly important. The physical brain has its own supply of energy and nutrients. Brain cells will die and be renewed. To have conscious thought somehow coupled to the physical maintenance of the brain, or to even use the same processes as are used by that maintenance, would surely result in chaos and confusion.
We require that cognition is kept separate from maintenance somehow. We do not want every physical change in the brain leading to, or being perceptible as, a ‘thought’ and nor can we have ‘thoughts’ requiring physical changes in the brain – this is just too slow.
The first computers used mechanical levers to implement logic circuits but they were very slow, the maintenance cost was proportional to the amount of thinking and the complexity of thought was limited by the complexity of the physical structure of the machine.
Modern computers are a big improvement, are much faster and the complexity has been factored out into the software which runs as electric currents. ‘Portable’ software means that the computations are now independent of the hardware that they are running on.
Computers, however, do not maintain themselves, so their electric currents are dedicated to computation, whereas in the human brain, electric currents have physical consequences not necessarily related to the intent of conscious thought. Using scalar waves is therefore a much better solution for thought processes that are to be largely independent of the physical state of neurons.
One free miracle: “As Terence McKenna observed, ‘Modern science is based on the principle: ‘Give us one free miracle and we’ll explain the rest.’ The one free miracle is the appearance of all the mass and energy in the universe and all the laws that govern it in a single instant from nothing.” – Sheldrake
So modern science is really asking for a whole set of interrelated miracles which seem finely tuned to permit the existence of life:
“The universe looks more and more like a great thought rather than a great machine. Mind no longer appears to be an accidental intruder into the realm of matter… we ought rather hail it as the creator and governor of the realm of matter.” – James Hopwood Jeans (Physicist, mathematician, idealist)
“The characterization of the universe as finely tuned intends to explain why the known constants of nature, such as the electron charge, the gravitational constant, etc., have the values that we measure rather than some other (arbitrary) values. According to the “fine-tuned universe” hypothesis, if these constants’ values were too different from what they are, “life as we know it” could not exist.” – Wikipedia
“The fine-tuned universe is the proposition that the conditions that allow life in the universe can occur only when certain universal dimensionless physical constants lie within a very narrow range of values, so that if any of several fundamental constants were only slightly different, the universe would be unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood. Various possible explanations of ostensible fine-tuning are discussed among philosophers, scientists, theologians, and proponents and detractors of creationism.” – Fine tuned universe
So there is perhaps a little confusion, with the scientists unable to explain in terms of science how the fundamental constants arise and the creationists seizing the opportunity to preach intelligent design. However, both are basing their views on what is apparent and not what is real; both are assuming that the description they have of reality is the best available.
Konstantin Meyl provides the most consistent description of physical reality so far with his Theory of Objectivity. This is based upon a single field equation (see above) and all the ‘fundamental’ constants are derived from this so there is nothing fundamental about them at all.
Meyl has calculated, just from his single equation and with no additional input, the masses of the elementary particles and the radii of the elements [more]
So we really are in a situation now where we only need a single ‘miracle’, which is the prior existence of some medium, the behaviour of which is consistent with the field equation.
“Sooner or later even the last natural scientist will realize, that nature does not provide ‘constants’ at all. If ‘constants of nature’ did exist, as listed in textbooks and encyclopaedias, then they aren’t consistent with causality, since we don’t know the cause, why the factor should have exactly this size and no other. Behind every so-called constant of nature unnoticed is hiding a physical closed loop conclusion. (solution)” – Scalar waves p. 599
So ‘causality’ here remains within the realm of the physical world or more accurately, within the (theoretical) confines of the Theory of Objectivity.
9. Apparent paranormal abilities such as telepathy are the illusions of Bad Science The root cause of this attitude, I think, is not that there is lots of bad science around (there certainly is) but that paranormal phenomena have, by definition, no plausible mechanism within the accepted scientific frameworks.
This leads to a view that “If there is no mechanism then it isn’t science and so it isn’t really happening.” This isn’t quite true of physicists though. Reading books and papers on biology and consciousness written by physicists it seems that almost all of them believe in some sort of telepathy and even life after death.
The reason is that they are used to working with ‘insubstantial’ entities such as force fields, ‘information’, quantum entanglement and action at a distance. The brain is assumed to work by electric fields and these are the ideal candidate for transmission of thoughts.
Konstantin Meyl describes instead magnetic scalar waves and wave resonance as being the medium of choice for thought transference. These turn out to have precisely the properties required to describe many experiments on ESP.
Are hypothesised to be the medium for cognition
Can form persistent connections between two individuals
Can penetrate walls
Resonant connections do not diminish with distance
Connections may be stronger between related individuals (The ‘Hill effect’)
The existence of a putative mechanism now means that there is something to investigate, something to try and measure or in other words some chance of doing some proper science.
Dean Radin (pictured) is arguably at the forefront of ESP research and is mentioned by Sheldrake. He and others have tried to make a science out of PSI research by introducing rigorous controls and by attempting to remove bias by the introduction of random number generators.
The problem with random number generators, however, is that there is no guarantee that they are in fact ‘random’. Many are based upon some assumed random process from nature, such as radioactive decay, but The Shnoll Effect shows that these figures depend upon planetary alignments such as eclipses, and the page Neutrinos, eclipses and plagues gives the mechanism as variations in the solar neutrino stream.
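The standard frequency check used to validate such generators is easy to sketch (my own illustration, standard library only); note that passing it rules out gross bias, not the slow, externally driven drifts alleged here.

```python
import random
from collections import Counter

bits = [random.getrandbits(1) for _ in range(100_000)]
counts = Counter(bits)
expected = len(bits) / 2
chi2 = sum((counts[b] - expected) ** 2 / expected for b in (0, 1))
# One degree of freedom: a value above ~3.84 is suspicious at the 5% level.
print(f"counts: {dict(counts)}, chi-square: {chi2:.3f}")
```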
One experiment from Radin showed an apparent ability of subjects to introduce a bias into the double slit experiment by thought alone. The choice of slit for a particle to go through had a slight bias that was different from a control experiment.
Dean commissioned some statisticians to repeat the experiment and to comment on the results [here]. Walleczek and von Stillfried were unable to reproduce the results. In addition, they tried the experiment again but this time with no test subjects at all. They found that they still got a positive result, a difference in bias between the two setups, even with no ESP attempted!
The authors offer various explanations for why this might be, including: “For example, the detection method may manifest a sensitivity to (as-yet) unknown physical factors which are beyond the ability of the particular method to reveal, track, and identify”
So variations in the neutrino stream could conceivably be influential in the irreproducibility and could even be the cause of the effects manifested in the first place.
Despite their best efforts then, ESP researchers may be discovering, not paranormal abilities, but subtle physical influences unknown to most scientists.
2. Matter is unconscious Whether or not this is true depends upon precisely what is meant by ‘unconscious’ but the page The origins of life presents an argument that there is effectively a world parallel to the physical that might be called etheric and consists of an informational field which organises and animates all physical matter.
Assumptions that there is ‘something else’ need to be re-examined now, as it is entirely possible that, with the recent discoveries by Meyl, we have everything we need in order to explain all of the observations and measurements that we can make of the world.
No sensible discussion on consciousness can take place until we have a reasonable definition or characterisation of consciousness.
5. Nature is purposeless and evolution is without direction The standard view of evolution is one of small random variations of DNA leading to small random variations of phenotype, which are then selected for, with propitious variants surviving to reproduce.
Keith Baverstock
Now quite apart from the fact that DNA has very little to do with inheritance (The DNA delusion), the way that neo-Darwinism is phrased somewhat skips the fact that all development must be according to the laws of physics and must involve rather stable patterns of molecular arrangement or we are finished before we have even started.
The interpolation of DNA and some imaginary transcriptional mechanism has conceptually de-coupled the evolutionary process from any physical law or principle and reduced it to theoretical randomness whilst at the same time giving the impression that almost any end product is possible. In reality though, the construction of a human being must obey some quite restrictive conditions and must be stable to perturbations at all stages of development and evolution.
The ‘direction’ of evolution therefore is towards ever more efficient ways of transducing solar energy into functional shapes and units: Evolution and entropy. Organisms use the laws of thermodynamics to their evolutionary advantage instead of fighting against them, as explained by Baverstock and Rönkkö:
“In summary, we propose that the life process is based not on genetic variation, but on the second law of thermodynamics .. and the principle of least action, as proposed for thermodynamically open systems by De Maupertuis (Ville et al. 2008), which at the most fundamental level say the same thing. Together they constitute a supreme law of physics..” – Baverstock and Rönkkö
“All results of the evolution in the biosphere that have arisen between the ‘capacitor plates’ of the earth itself and its ionosphere can be regarded as structured capacitor losses, which also apply to humans” – Konstantin Meyl
8. ‘Mind’ is inside the head Yes. Various people have postulated various levels of exotica, including a whole extra dimension to house all our memories, but a magnetic scalar-wave network seems sufficient to describe consciousness.
The impression that our thoughts and visions are ‘out there’ is a clever and necessary illusion created by the structure of our cognitive system. (see video above).
The brain maintains spatial awareness by constructing a (literal) internal space with the ‘self’ at the centre. The outside world is big but the brain is small so the internal space is wrapped around in a nested torus system so as to fit it all in.
“The Regularities of Nature are essentially habitual” – Rupert Sheldrake The field equations of the Theory of Objectivity are fixed but will organise into stable and adaptive control systems at a very early stage and hence manifest as higher ‘laws’ which may well have become ‘habitual’ over a million years of evolution.
Habituation, then, is not at the root of the laws of physics but an emergent feature of them.
R.S. gives an example of the growth of certain crystal structures which once seemed impossible but now are routine. It seems to be assumed that crystals are formed by the random banging together of molecules which fall into some natural alignment because of their regular shape but if we hypothesise for one moment that there exists a hidden field that induces organisational forces on the molecules then the situation becomes clearer.
Existing crystals lead to an attendant magnetic field which is the entity that acts as the nucleating structure, not the molecules themselves, promoting further growth. Changes in geophysical factors such as the Earth’s magnetic field or variations in the neutrino stream operate on this field directly, and thence on the physical molecules indirectly, to produce the changes in the patterns observed.
The experiments of Giorgio Piccardi clearly show time variations of measurable parameters in both biological and chemical processes.
Things which seem both variable and fundamental at the same time are certainly not fundamental but ‘emergent’. This is the main reason behind the Science Delusion itself: downstream effects have been mistaken for root causes and variables taken for constants:
“The Science Delusion is the belief that science already understands the fundamental nature of reality in principle, leaving only the details to be filled in” – Rupert Sheldrake
The Law of Gravitation from Isaac Newton is described as consisting of a force-field that emanates from an object by virtue of its mass and affects other objects at a distance by virtue of their mass. Newton himself was not at all happy with the idea of action at a distance. Konstantin Meyl fixes the problem.
According to Wikipedia, the modern formulation of Newton’s Law of Gravitational attraction is as follows:
“Every point mass attracts every single other point mass by a force acting along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them” – Wikipedia
So each object in the universe is having an effect on other objects, possibly a great distance away. There is no mechanical connection but the idea of something called a ‘force’ has been introduced to make the whole thing seem more plausible.
Newton formalised this and produced a workable theory which was vindicated by experiment, but he wasn’t happy with the implications:
“That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.” – Newton 1692 – Wikipedia
Good man!
The formulation of this ‘influence’ as stated above gives the idea that there is some sort of connection between two distant objects and each is having a causal influence on the other across a space of possibly millions of miles. This impression is so strong that it is given a name, ‘gravity’, and physicists adopt it as a real entity.
Years later, Newton was to write: “I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses…. It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies.” – Newton 1713 – Wikipedia
Newton now accepts gravity itself as an existing phenomenon that is instrumental in the movement of all celestial bodies. The problem of action at a distance has been circumvented by framing gravity itself, and not the distant object, as the causal factor, the prime mover.
Progress has been made; the thing causing an object to move around is now not a mass many miles away, but the strength and direction of a local ‘field quantity’. The immediate cause is not distant but proximal. This marks the start of a move away from material or mechanical action and towards a field physics where abstract field interactions are paramount.
The modern formulation places the particles of distant mass as doing the attracting, as being the first link in a causal chain acting through gravity as a mediator. Newton, however, could find nothing that could be the cause of gravity and so merely had to accept its existence.
In the paragraph quoted above, Newton doesn’t even describe it as a ‘force’ but only says that it accounts for the motions of the objects.
So what is it that causes the gravitational field? In the field physics of Konstantin Meyl, the field is ever present and evolves according to the field equations of the Theory of Objectivity. There is no ‘mass’ needed to account for the source of the field, no mass for the field to act upon, and the motion is not described as being caused by a ‘force’.
There are no ‘objects’ in the theory of Meyl and no ‘matter’ exists as distinct from the field. Instead, what we call ‘atoms’ consist of stable states of field vortices which combine together to form molecules and again to form objects, humans and planets.
There is no separation between field and matter and so no need to describe mechanisms by which one may affect the other. Matter and Field are continuous with each other, made of the same ‘stuff’ and subject to the same laws.
The idea of causation as usually conceived, depends upon some sort of separation, some distinction between discrete objects so that an effect or influence may pass from one to the other, possibly via some intermediary such as gravity. This results in a proliferation of concepts, influences and ‘stuffs’ such as gravity, mass (three types no less!), charge, magnetic force, inertia, energy, the permittivity of space etc.
With Meyl’s theory, the field develops according to the field equation at every point in the universe and the emergent patterns are what we perceive as reality. In practice this means that various patterns are formed (e.g. planets) which result in a concentrated field strength that diminishes with distance, and it is this that appears to act as some sort of ‘force’ field by virtue of the effect that it has on other field variations (other planets, falling apples, human beings).
There is no real matter, mass or forces, merely the illusion of such. The moving together of two ‘objects’ is not by gravity or any action at a distance but by the interaction of the field with itself.
‘Causes’ as such do not travel all over the place but field changes propagate at the speed of light giving the impression of separation and causality whereas in actuality, everything develops as an undivided whole but according to local field conditions only.
Radioactive decay. Large atomic nuclei will ‘decay’ by the emission of particles and photons at seemingly random time intervals. If we are to believe in an ordered universe at all then we should not be satisfied with explanations involving the random fluctuations of quantum vacuum energy but instead look for a cause in the physical universe.
The solution proposed by Konstantin Meyl in his book Scalar Waves is that neutrinos from the sun will occasionally pass close to an atomic nucleus and the field disturbance thereby created will supply sufficient energy to destabilise the structure and result in the emission of a wave-particle.
This is causal now instead of random and can therefore be tested. If the decay rate depends upon neutrino density then any change in that density will result in an increase or decrease in decay rate.
Thousands of measurements made by Simon Shnoll confirm this hypothesis. Radioactive decay (right) varies according to the time of year and the phases of the moon. The Shnoll Effect
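The test implied here is straightforward to sketch (my own illustration with synthetic, featureless data; real decay counts would replace the generated series): project the count series onto an annual sine and cosine and ask whether the recovered amplitude stands clear of the noise floor.

```python
import math
import random

days = 3 * 365
counts = [10_000 + random.gauss(0, 100) for _ in range(days)]  # synthetic data

w = 2 * math.pi / 365.25  # annual angular frequency, per day
a = 2 / days * sum(c * math.cos(w * d) for d, c in enumerate(counts))
b = 2 / days * sum(c * math.sin(w * d) for d, c in enumerate(counts))
print(f"annual-cycle amplitude: {math.hypot(a, b):.1f} counts")
```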
Atomic clocks depend upon just such atomic processes and we would therefore expect to see altered timekeeping during significant cosmic events such as eclipses. This turns out to be true, with even clocks in the same laboratory losing synchronicity if they are aligned differently with respect to the sun.
Atomic clocks during eclipses The times on two caesium atomic clocks were compared and were seen to be generally in agreement except during an eclipse.
The two clocks were in the same laboratory but were oriented in different directions. This is unexplained by mainstream physics but is not at all surprising if we accept the physics of Konstantin Meyl.
Meyl puts the effect down to an additional energy input from solar neutrinos which become more focused when the sun, earth and moon are in alignment.
Further data from around the world support this correlation and the effect is so strong as to affect various Foucault pendulums around the world. “The effect of an eclipse of the sun, to which for instance a Foucault pendulum reacts, can equally be traced back to the interaction of the neutrinos as the free energy.” – Scalar Waves p441.
The experiments of Giorgio Piccardi (right) and Vlail Kaznacheev both showed that biological processes were affected by the season, the phases of the moon and particularly by eclipses. The explanation given by Meyl is that biological systems will absorb energy from solar neutrinos and utilise it to their advantage.
The gravitational field of the moon acts as a lens, focusing the neutrino stream on some parts of the Earth’s surface, creating ‘hot-spots’ of biological activity (not always beneficial).
From Meyl’s book:
The shadow of an eclipse follows a very precise path across the globe, making it ideal for investigating correlations between the neutrino stream and global events.
Meyl has linked variations in the neutrino stream and alteration in field vortices to both earthquakes and plagues as well as mere sleep disturbances:
“Whoever places himself in the centre line of the complete shadow on August 11, at first will detect a decrease of the neutrino radiation to 50 to 60 percent, then a steep increase to 2800 percent and from the summit again the whole backwards, while standing on the earth he turns by under the moving moon shadow. The ring with half the radiation, which reaches us first, doesn’t pose a problem since, as said, we only have half the radiation in every night. Some animals and plants as a result erroneously will set out for sleep.” – Scalar waves p. 417
“In the case of an eclipse of the sun effects on the biology, like problems with the heart among affected, at least can’t be excluded. If the scalar wave density increases above the density which is normal, then this has a positive effect on the economy of energy, as long as the body is capable to regulate the amount taken up. If the regulatory process however should fail, then the risk of a self-inflammation exists.”
“Also straw bales and other organic and inflammable materials could thus go up in flames.”
“But before that happens, first the information technical influence of the scalar waves will show. Here we have to expect a psychotronic influencing, which is showing in a limited ability of perception. History teaches us as an example that a by Thales of Milet predicted total eclipse of the sun at 28.5.585 B.C. compulsorily has ended a battle in Asia Minor between the Medes and the Lydians, because the soldiers apparently most literally had gone out of their mind” – Scalar waves p. 419
Death and war play dice about the fate of mankind during the eclipse of the sun of 1562
There are in fact several papers on the effects of eclipses on health, but results are patchy and research is struggling because researchers have no putative mechanism to investigate, and so effects are assumed to be largely psychosomatic. Patients with schizophrenia appear to be particularly sensitive, showing both behavioural and physiological changes during an eclipse:
“Of the hormones studied, it is prolactin which showed an increase in titre associated with behavioural abnormalities in concerned patients during and immediately after the total solar eclipse.
“We find that over the six post eclipse days the previously increased titre of prolactin shows a tendency to come down gradually to the normal and behavioural abnormalities and symptoms like sleep disturbance, restlessness and anxiety were seen in patients” – Boral et al
The brain is a scalar wave computer. It is formed from a scalar wave template and when developed will host a toroidal standing wave complex which acts as the computational centre for holistic cognition. Communication with other parts of the body is by means of longitudinal scalar waves via the myelin sheath surrounding the nerves.
In the case of the Brain of a White Collar Worker, a man was missing perhaps 90% of a full-sized brain. He had some leg weakness on one side and a low IQ of around 75 but still managed to maintain a job as a civil servant and to raise a family. This case is cited by some as proof that the brain is not the centre of intelligence and has some other purpose.
We are told by neuro-scientists that the functions of the brain are arranged geographically, with some areas responsible for emotional regulation and others processing visual information etc. Either the brain above has compensated in a spectacular fashion or what we are being told is simply not true.
John Lorber (1915-1996) produced images of hundreds of brains and found many cases of hydrocephalus that had resulted in reduced brain size but often with no great cognitive impairment. In one case, a young man had an IQ of 126, gained a first class honours degree in mathematics and had normal social function, but hardly any brain.
“When we did a brain scan, we saw that instead of the normal 4.5 cm thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimetre or so. His cranium is filled mainly with cerebrospinal fluid.” – John Lorber
The man had been referred to a physician as a boy because his head was slightly larger than normal.
What does all this mean?
The conundrum here is that the human head is quite large and uses up a lot of resources, which by itself is an evolutionary disadvantage. There must be some other pressing need then for a large cranium although brain volume seems irrelevant.
The logical conclusion is that the important factors in the workings of the brain are not the volume or number of neurons but instead the overall shape, size and proportions of the organ itself.
To see how this could be so we will need to understand a bit about embryonic development, fluid pressure, scalar waves, electromagnetic forces, fractal holo-fields and the golden ratio.
If, after this, things still seem a bit incredible then we can recall the words of Sherlock Holmes: ”When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”
In embryonic development we find that blood flow precedes the development of the blood vessels and apparently acts somehow as a guide for their development. Electric fields are suspected, and this idea is reinforced by the observation that spiralling blood flow in the aorta is instrumental in the formation of the heart as a spiral vortex machine.
Once the heart is formed, regulation of pressure serves to refine the shape and determine the dimensions of the arteries and indeed the thickness of their walls.
Consideration of development is important. Evolutionary processes are commonly evaluated according to their function, but what is hardly ever discussed is that every physical feature in biology has to have a physical cause; there has to be some developmental plan that can result in that organ or ability.
The developmental function of the early brain, then, is to increase in size, thereby exerting a gentle outward pressure (static electric forces) on the still malleable skull and causing it to expand at a controlled rate. There is no need for DNA to be involved here; the forces are physical and the ‘plan’ is simple.
The brain grows in a particular way which determines the rate of expansion of the skull. Grey matter is added in a way that results in a ‘blooming’ much like a cauliflower or cloud might develop. This allows for a refinement of shape which a simple balloon-like inflation would not.
A skull that is expanded via a filling of water will experience equal pressure in every direction and tend to be larger, wider and more spherical than the norm.
The Golden Ratio
The normal skull is not spherical though, it has a very specific shape of very specific proportions and those proportions involve the Golden Ratio.
So to provide fine-grained control of the shape of the developing brain, we need a morphogenic field that somehow ‘knows’ about the Golden Ratio.
As luck would have it, the scalar waves of Konstantin Meyl are the ideal candidate for such a function. It isn’t so much that they are capable of such a ratio but that they naturally form three dimensional structures whose most stable state has dimensions in the Golden Ratio.
So these dimensions are actually ‘hard-coded’ into the laws of physics, and it should not be surprising to find them cropping up all over the place. As an example, the dimensions of the red blood cell are also in this ratio: Blood flow and scalar waves
So a series of linked toroidal scalar waves are suspected of being instrumental in the development of the brain and skull. But what happens once development is complete?
This magneto-electric field now has another function, which is to act as the substrate for cognition. The whole brain is the host for a distributed ‘holographic’ field which is responsible for information management for the rest of the body as well as intellectual and emotional computation.
The field is non-dissipative and maintains stability as a toroidal attractor state with the ideal dimensions to suit its physical nature.
Mae-Wan Ho has described the field in the brain as a sequence of nested tori, with each layer vibrating at a different frequency and the ratio of the frequencies between adjacent layers being equal to Φ, the golden ratio again. This ensures that there is minimal resonance between layers of the field and hence least interference but maximal independence between layers. Good design.
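A small sketch of that frequency scheme (my own construction from the description above): successive layers scaled by Φ, with a check that low-order harmonics of one layer fall between, rather than on, the frequencies of the others.

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618
base = 10.0                   # arbitrary base frequency, Hz
layers = [base * phi ** n for n in range(5)]
print(["%.3f" % f for f in layers])

# Nearest miss between harmonics of layer 0 and the layer-2 frequency:
print(min(abs(k * layers[0] - layers[2]) for k in range(1, 10)))  # ~3.82 Hz
```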
Signals are sent to and from the brain via the nerves but again using scalar waves as the transmission medium: Scalar waves and nerves
Physicists almost unanimously require that the field be holographic in nature, meaning not that it is an illusion but that each part of the field contains all of the information from the entirety of the field. This means that the field is also fractal (self-similar) in nature, with any small part being a miniaturised version of the field as a whole.
The structure of the torus is ideal for the representation of such a field, being supportive of stable, resonating scalar waves and being describable by the same laws of nature at all physical scales of reality. There is clearly a need in biology for information to move freely from the macro to the sub-atomic and back, so the idea of a holo-field is pretty much a necessity given only this requirement and nothing else.
In the new field physics of Konstantin Meyl, there is no Planck length, no minimum size to any piece of the universe, and so any piece of bio-field can theoretically hold an arbitrarily large amount of information.
In one experiment, tissue from a human brain was implanted in a mouse and an immediate increase in learning ability was demonstrated, leading the experimenters to conclude that it isn’t so much the size of brain that is important as the quality of the tissue.
Another interpretation is that along with the material substance of the brain, the scientists had transplanted a piece of the holo-field containing all of the information from the human brain including memories, emotional processing and sense of self. This structure had merged with the field of the mouse to produce what is essentially a single hybrid consciousness.
Advisable not to try this at home, maybe.
‘Life after death’ experiences are recorded where a patient will describe complex and coherent experiences that happened whilst zero cortical activity was being recorded. This is because the scientists were recording only classical electric fields, which are radiative and hence measurable. Scalar waves are non-dissipative and difficult to measure.
“Brain death is a lie, it has always been a lie and it continues to be a lie” – Paul Byrne M.D. Many people have made complete recoveries after a diagnosis of brain death. Many people have had their organs removed whilst arguably still alive. Clearly the wrong thing is being measured. [video]
Summary
The brain is a scalar wave computer whose proportions derive from its development and are also instrumental in its eventual function. The overall dimensions are crucial to its performance, which depends on electromagnetic vibration rather than on chemical exchanges in the neurons.
A millimetre of grey matter appears to be all that is necessary to create a toroidal signal transducer of Golden Ratio dimensions, but irregular brain geometry can disrupt the standing wave within the skull and result in impaired cognition.
Damage to specific areas of the brain will disrupt the field in specific ways which makes it appear that function is somehow attached to physical material when it is really a ‘holistic’ or holographic field with information distributed across the whole field and very likely throughout the entire body.
This page looks at the epidemiology of influenza and asks whether it is somehow related to the newly discovered phenomenon of magnetic potential vortices. The idea is that at certain times of the year there is an increased likelihood of structured electromagnetic discharge from the ionosphere, and that this is somehow causing outbreaks.
The epidemiology of flu demonstrates several outstanding and well documented features that need some explaining:
Seasonality with sharp peaks at winter solstice
Latitudinal correlation of outbreaks
Hemispherical correlation – an epidemic in the Northern hemisphere is followed by an epidemic in the Southern hemisphere
Tropical outbreaks – in both summer and winter
Local outbreaks independent of population density
Seasonality. The chart shows deaths from influenza and pneumonia. The seasonal accuracy is striking, with peak deaths occurring close to the winter solstice and the ‘base rate’ in summer remaining at a constant level.
This sort of phenomenon cannot be caused by light or temperature levels as these vary from year to year and vary greatly according to latitude.
What is happening is that the health of the population has somehow become entrained to the seasonal rhythm and is for some reason more susceptible to disease at midwinter. A not unreasonable hypothesis is that this resonant entrainment is a response to some feature of the Earth’s magnetic field, since that is independent of both temperature and light levels.
Correlations with day length or humidity are, from this point of view, illusory.
Latitudinal synchrony. This chart from the Fred Hoyle paper Viruses from Space (originally from Hope-Simpson) shows influenza rates in Prague and in Cirencester, UK. They both lie on the same latitude and both show remarkably similar patterns. Other studies support this pattern.
Note that in the winter of 1973–74, the peak rates are delayed past the solstice in both places by the same amount. This suggests that it is perhaps not the population that is directly attuned to the seasons, but rather that some other cause is responsible for the flu, and that this phenomenon is itself strongly seasonal yet capable of variation.
The attack rates of influenza in Prague and Cirencester (Hope-Simpson)
A departure from solstice is seen and it is consistent along a line of latitude.
Magnetic field vortices. Shown is a mini tornado, a vortex of spinning air that can form seemingly out of nowhere and vanish when sufficient energy has been dissipated. The physics of Konstantin Meyl allows for such vortices, not only in the physical substance of the air but also in the magnetic field of the Earth itself.
The surface of the Earth and the ionosphere form two capacitor plates with a potential difference of about 200,000 volts, and classical physics allows for a discharge between the two either as a steady, slow current spread out over the whole planet or as a sudden, violent discharge in the form of lightning during a storm.
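For scale, a back-of-envelope sketch of this ‘capacitor’ (the 200,000 volts comes from the text above; the effective ionosphere height of roughly 60 km is an assumed round figure):

```python
# Mean fair-weather field between the 'plates' of the Earth-ionosphere
# capacitor: potential difference divided by an assumed plate separation.
V = 200_000.0      # potential difference in volts (figure quoted above)
h = 60_000.0       # assumed effective height of the ionosphere in metres
print(f"mean field ≈ {V / h:.1f} V/m")   # ≈ 3.3 V/m averaged over the column
```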
The newly formulated equations from Konstantin Meyl however allow for instabilities in the field to form vortex currents and to discharge much in the same way as the mini-tornado.
Corona discharge. Such vortices can be seen in an extreme version as corona discharge coming from protuberances on power lines but there isn’t any reason why somewhat subtler energies should not discharge in an invisible and apparently harmless manner from ionosphere to ground.
Tropical outbreaks are discernible in the data from the Hope-Simpson paper. It appears that influenza is sensitive to what must be a very small stimulus so either the population is resonating to the seasons or some other intermediary is doing the job.
Outbreaks in the tropics occur at both the summer and winter solstices. Smaller outbreaks also occur outside the tropics at other times of year.
These look like harmonics. The ionosphere is resonating somehow like a large electromagnetic bell, with a fundamental frequency of one year and harmonics at six-month and three-month intervals.
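As a sketch of this harmonic picture (the amplitudes are invented purely for illustration), an annual fundamental peaking at the winter solstice plus a weaker six-month harmonic produces exactly the pattern described: a dominant winter peak with a smaller secondary peak at the opposite solstice.

```python
import numpy as np

# Two years of a toy signal: annual fundamental plus a weaker 2nd harmonic.
# t is in years, with t = 0, 1, 2 at successive winter solstices.
t = np.linspace(0.0, 2.0, 801)
signal = 1.0 * np.cos(2 * np.pi * t) + 0.4 * np.cos(4 * np.pi * t)

# Simple local-maximum detection over the interior points.
peaks = [round(t[i], 2) for i in range(1, len(t) - 1)
         if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
print(peaks)   # [0.5, 1.0, 1.5] — big winter peak, small summer-solstice peak
```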
Hemispherical correlation. Another chart, again from Hope-Simpson, shows an ‘epidemic’ in the Northern Hemisphere followed by a similar feature six months later in the Southern.
Viral transmission – not credible
Solar effect in the north waits for six months and re-emerges in the south – not particularly credible
Again this looks like an annual effect of some atmospheric resonance. A standing wave of some sort is present and effects apparent on one side of the planet are seen on the other side at a 180° phase difference without necessarily passing through intermediate points.
Videos of resonating membranes and balloons help visualise what might be happening with the ionosphere. The (1,1) mode (right) shows a standing wave developing with large amplitude at each ‘pole’ and smaller amplitude within the tropics (centre).
This is the basic mode for resonating waves, and superpositions of higher-degree harmonics on top of it can explain the finer-grained seasonal effects. Note that the Earth’s physical and magnetic bodies do not form a simple symmetric system like a balloon: the physical and magnetic poles are not in the same place, the Earth’s axis is tilted, and the planet is spinning with respect to its own orbit around the Sun.
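A minimal stand-in for that basic mode (a deliberately simplified model, not a simulation of the ionosphere) makes the antiphase behaviour explicit: with amplitude proportional to the cosine of the colatitude, the two poles always carry opposite sign while the equator barely moves.

```python
import numpy as np

# u(θ, t) = cos(θ) · cos(2πt): the simplest standing-wave mode on a sphere,
# with θ the colatitude. North pole (θ=0) and south pole (θ=π) oscillate in
# strict antiphase; the equator (θ=π/2) stays near zero throughout.
for t in np.linspace(0.0, 1.0, 5):                  # one full cycle
    osc = np.cos(2 * np.pi * t)
    north, equator, south = np.cos(0.0), np.cos(np.pi / 2), np.cos(np.pi)
    print(round(north * osc, 2), round(equator * osc, 2), round(south * osc, 2))
```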
The ionosphere is not necessarily vibrating with a physical amplitude but rather a variation of magnetic characteristics.
Local outbreaks of flu were studied by Fred Hoyle and summarised here: The HART group model. It is easy to think that if flu tends to occur in localised groups that it must be infectious but the data studied by Hoyle not only did not support this view but actually ruled it out.
The outbreaks were localised but were scattered in a fashion that was random with respect to location and random with respect to the population density. Hoyle believed in some sort of viral cause and so reached the only conclusion he could which was that the virions had come from outer space and were only available during winter because of the location of the Earth within the solar system.
Time to consider localised magnetic vortices instead.
The hypothesis of this page then is that:
Magnetic potential vortices are responsible for influenza outbreaks
Such vortices can be as thin as a pencil beam or as wide as a cruise ship
They will cause about 10% of the affected population to succumb to flu
They are more prevalent in winter
They are produced by some magnetic instability that respects latitude
They are not easily measurable by scientific instruments
They can pass straight through the roof of a building
Anecdotes of entire families or hospital wards getting flu at the same time now make sense; they were all in the same room at the same time or went out on a walk together and had their regulatory systems disturbed by the same magnetic field discharge.
Do such vortices exist?
Images of magnetic vortices have now been created from data produced by radio telescopes.
In this video Cleo Loi explains the process and shows what appear to be the upper halves of magnetic ring vortices in the atmosphere, which organise ionised gas particles (plasma) into the shapes seen.
Now if these magnetic field patterns show some seasonal variation and latitudinal affinity then they are surely a good candidate for the initial cause of the processes described above.
Man made radiation. There is much evidence to show that proximity to cell-phone towers increases the risk of chronic diseases such as cancer but can also trigger episodes of influenza; so much so that Soviet scientists labelled the disease ‘radio wave sickness’. [link]
Konstantin Meyl claims that it is not the radio waves themselves that are having biological effects but rather the scalar waves (potential vortices) that are emitted as an artefact and measured as ‘noise’ by scientific instruments.
Much circumstantial evidence points to magnetic vortices as responsible for biological organisation and Meyl has stated simply that ‘the brain is a scalar wave computer’.
This then accounts for the sickness resulting from cell-phone towers and also for the confusion created when performing studies. The studies are measuring the wrong thing and getting inconsistent results: they are measuring the transverse radio waves instead of the longitudinal scalar waves.
Pneumonia. Why is influenza associated with pneumonia, and why does the one seem to transform into the other? Why does pneumonia seem to happen mostly in hospitals, and why is it not infectious? Why do diseases seem to get worse as more patients are added to a crowded ward?
Pneumonia is characterised by a deterioration and eventual necrosis of lung tissue followed by a bacterial proliferation which serves the purpose of removing the dead tissue.
The lungs have a lot of work to do, which makes them somewhat sensitive to energy or oxygen deficit. They must maintain two separate blood circulations: one to collect oxygen from the inhaled air and another to supply oxygen to the lung tissue itself.
The blood moves around the capillaries powered by scalar waves (Blood flow and scalar waves), and this movement serves as one of the main powerhouses for the circulation to the whole body.
Heart rate, and hence blood flow, are reduced immediately when posture departs from the vertical (The Heart and Circulation), so lying in a hospital bed is already reducing oxygen to all parts of the body. Not a good healing state.
Meyl has stated that atmospheric moisture contains stored energy in the form of scalar waves and that this energy can be released into the lungs upon inhalation. Scalar waves can continue to absorb energy from solar neutrinos and can release it in various forms:
Movement aiding the flow of blood
Materialisation of electrons
Transmutation of other elements (?)
The release of oxygen into the blood by the splitting of H2O
So a vulnerable person succumbs to flu one winter, feels dreadful and is admitted to hospital as a precaution. The windows are closed and they are breathing air that has been depleted by other patients and possibly re-cycled via an air-conditioning system. They are lying horizontally, which automatically reduces circulation and therefore deprives the body of oxygen and the blood of its locomotive force. In addition, they are very likely surrounded by various electronic devices emitting an unholy mixture of microwaves and magnetic vortices.
The lungs have been deprived of oxygen and energy, the tissue is stressed to breaking point and pneumonia ensues.
Florence Nightingale said that as patients were added to a ward, it wasn’t that new diseases emerged but that existing diseases got worse. We can now give a reason for this and also offer some explanation for Legionnaires’ disease: the recycling of air, by whatever means, is leading to a dangerous lack of scalar vortex energy.
Summary
The hypothesis outlined above is somewhat speculative but there is good evidence for each of the separate parts.
Any hypothetical mechanism must be able to explain the epidemiology of influenza, and this requirement rules out a virus as the cause. The idea of magnetic discharge from the ionosphere, however, is consistent with the population data, and there are documented mechanisms for causing disease: EMF and Biology
Questions as to how the effects of 5G, for example, can mimic an assumed seasonal viral disorder are now answered by saying that the symptoms are the same because the cause is the same, and hence the disease is also the same in each case. It is a bio-regulatory disturbance caused by magnetic potential vortices of one form or another.
Various researchers have given up on determining how blood flows around the circulatory system and have concluded that whatever propulsive energy is supplied by the heart is insufficient to account for the total flow, and that there must be some other forces at work.
“The widely accepted pressure-propulsion circulation model fails to explain an increasing number of observed circulatory phenomena”
“Experimental and phenomenological evidence suggest the opposite, namely that the blood possesses autonomous movement sustained by the metabolic demands of the tissues at the level of microcirculation.” – Branko Furst
So the blood appears to move by itself. Since genuinely self-propelled motion would contradict the laws of physics, we must look to some other source of energy to explain the blood flow, and to some physical mechanism by which this energy is harnessed and converted to kinetic energy.
Assertions:
Blood flow is organised into scalar waves
Scalar waves exist within red blood cells
Scalar waves form within the blood plasma
Scalar magnetic waves may exist as an etheric blood flow
These waves are energised by solar neutrinos
These structures are instrumental in circulation of the blood
Start with a drawing from Viktor Schauberger of water flow in a pipe.
The flow of water is largely spiral and almost friction-free. The main body of the water has separated from the walls of the pipe (observed by Schauberger), thereby reducing friction even more, and toroidal ring structures (scalar waves) exist at intervals to act as bearings, further reducing friction and propelling the main flow.
Such flow was measured by Schauberger (here) to demonstrate a sinusoidal response to increasing pressure and to actually develop negative resistance at certain flow rates. This result was reproduced by independent researchers and implies that some extra input of energy is coming from somewhere.
An image from Charlie Peskin’s PhD thesis shows fluid flow (top to bottom) through a valve structure such as may be found in the heart or veins. Vortices can be seen forming around the valve outlet and they will, when fully formed, close the valve behind them.
Another image from the paper by Merab Beraia showing spiral structures everywhere in the arterial system with even supposedly ‘turbulent’ blood flow being comprised of highly organised helical flow. The suggestion is that the blood is not behaving as a simple Newtonian fluid at all but that its movement is largely determined by electromagnetic forces, with spiral formations being typical of the interaction between charged particles and magnetic fields.
So the blood is forming toroidal structures known as ‘scalar waves’ which are electrically structured, largely self-sustaining and highly energy-efficient. The performance of such structures allows the blood actually to accelerate as it comes out of the heart and to propel itself along the arteries.
A short video shows similar structures in sea water.
Here we see luminescence in sea water that is attributed to plankton. Maybe, but waterfalls have demonstrated the same phenomenon and Viktor Schauberger claimed to have reproduced it in a laboratory.
Konstantin Meyl gives the following hypothesis: Small vortices created in the water act as receivers for solar neutrinos and then release the energy as photons. Neutrinos are already in the form of a charge-vortex (right), making their absorption into similar structures highly plausible.
This would then be a good explanation for Schauberger’s observation of ‘negative resistance’: the water flow is already friction-free and is absorbing additional energy through neutrino transduction.
Time to consider the possibility that vortical eddy currents in arterial blood will also absorb neutrino energy thereby magnifying their own action and helping propel the blood cells through the capillaries.
The Influence of the Golden Ratio on the Erythrocyte
This paper from Purcell and Ramsey claims that red blood cells are constructed according to the Golden Ratio.
If in addition we have an electric current flowing around the blood cell, we have precisely the conditions for the generation of stable scalar waves. The blood is energised by an electric field in the heart and toroidal currents are maintained by input from neutrinos.
The capillary problem. Fluid flow at small scales is profoundly different from macro-scale flow. Here, viscous forces completely dominate the flow dynamics, making even distilled water behave not so much like honey as like thick, warm tar.
Scott Turner explains:
Intuitive ideas of fluid flow arise from observations at the macro level and do not translate well to the micro-cosmos. Any explanation of capillary flow emerging from such intuitions must be considered invalid.
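A rough Reynolds-number comparison makes Turner’s point concrete. The figures below are standard textbook values for blood and vessel sizes, not measurements from any of the papers cited here:

```python
# Reynolds number Re = ρ·v·d / μ: the ratio of inertial to viscous forces.
# Below about Re ≈ 1, viscosity dominates completely.
rho = 1060.0    # density of blood, kg/m³ (textbook value)
mu = 3e-3       # viscosity of blood, Pa·s (textbook value)

def reynolds(velocity, diameter):
    return rho * velocity * diameter / mu

print(f"aorta:     Re ≈ {reynolds(0.3, 0.025):,.0f}")   # ≈ 2,650: inertia matters
print(f"capillary: Re ≈ {reynolds(1e-3, 8e-6):.4f}")    # ≈ 0.003: viscosity rules
```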
So how are blood cells squeezing through capillaries that are smaller than themselves? The idea that pressure generated a billion cells away can do this without exploding the intermediate arteries is a very big stretch of the imagination.
If our new toy is toroidal waves fuelled by neutrinos, then surely it is time to consider this?
Scalar waves were described above as created from physical matter (water) but with some electrical properties which help organise the matter into spiral flows and toroidal rings. Meyl, however, in his book Scalar Waves, describes them as stable dynamic states within the electromagnetic field itself with no need for any supporting material substance.
These waves can exist by themselves and carry energy and information around biological systems. One German scientist documents how they can apparently organise physical conduits within the cells in order to facilitate communication and then dismantle them when no longer necessary.
If blood is observed by competent scientists to ‘propel itself’ around the capillaries then maybe such field waves are implicated here, after all: what else is there?
Capillary flow (Pollack). The video clip from Gerald Pollack of water flowing autonomously through a tube (left to right) can be found here. “Unending flow through the tube; it can go on for a full day” – Pollack
We see water with no discernible source of power flowing steadily into a tube. The phenomenon only works with hydrophilic tubes and is enhanced by application of infra-red or ultra-violet light.
The whole phenomenon is self-organising, with the ability to absorb external energy in the form of electromagnetic radiation and to use it as ‘fuel’.
From chaos to order in active fluids – Morozov, on the paper by Wu et al.
In this paper, some biological substances including ATP were mixed with water and the resulting solution placed in small tubes and cylinders.
The fluid spontaneously organised itself into vortices, the vortices oriented themselves with respect to each other and then the whole thing started to move in a single unidirectional flow.
The diameter of the tubes determined the speed of flow and small notches made in the cylinder walls could be used to control the direction of the flow.
The flow then is not driven by a pressure gradient and the author asks if the idea of ‘pressure’ even makes sense in such liquids.
We know that the heart creates vortices of blood at the scale of centimetres but this paper now suggests that at lower scales, i.e. in the capillaries, there is scarcely a need to mechanically shape the blood flow as it seems quite capable of organising its own affairs.
In dogs whose hearts have been stopped, the blood continues to flow for up to an hour. This blood is clearly not being driven by the heart but by some residual energy left in the bloodstream that continues to organise and implement flow independently of whatever catastrophe may have occurred elsewhere.
The venous problem. If the blood is pumped around by pressure alone, then the rather slow flow emerging from the capillaries must speed up in the veins and eventually arrive at the heart as a flow that seems as fast and vigorous as the flow exiting the heart after being pumped.
Venous flow is claimed to be driven by muscular contraction, with the valves preventing return flow; but if the flow into the heart is as fast as claimed, then there is surely no return flow to prevent? And what happens when we are asleep or bedridden?
The valves may well prevent return flow, but they also serve the purpose of restoring vortices to the blood, which in turn can transmute ambient vortex energy into kinetic propulsion of the blood.
In addition, the construction of scalar waves will give the blood flow a specific direction. It is no good postulating some sort of energetic input to the bloodstream without both a description of a mechanism by which flow is generated and a way of determining the direction of that flow. The idea that scalar waves are produced by venous valves fits these requirements precisely.
Branko Furst: “Experimental and phenomenological evidence suggest .. that the blood possesses autonomous movement sustained by the metabolic demands of the tissues at the level of microcirculation”. So the blood flow is regulated and physically caused by the actual demand for the blood flow and this happens at the capillary level!
What are we to make of this? The body is making its own requests for blood flow at the cellular level, with each portion of capillary making a small contribution to the overall blood flow. We can suppose that requests for extra blood are made by the transfer of scalar waves somehow from organ to capillary and that these waves will then absorb neutrinos and maybe actually help drive the blood through the capillary to the required degree.
So running upstairs leads to surplus heat ‘waste’ (vortex energy) in the muscles which spirals inwards towards the capillaries, becoming more concentrated as it does so. This energy passes through the capillary walls and has an effect similar to the application of infrared light. The overall blood flow increases with this extra energy and the heart beats faster as an end result of the increased blood flow, not as an initial cause of it.
The blood enters the heart as fast as it exits (this must be the case anyhow) – so is the heart causing the blood to flow or is the flow causing the heart to pump? The speed of the blood entering the heart cannot be caused by blood that was pumped out of the heart and around the body as the two flows are decoupled from each other causally by both the capillary and venous blood flow, neither of which are dependent upon arterial blood pressure.
Is the heart sucking up the venous blood (opinions seem to differ on this) or is the blood simply moving by itself with the beating heart simply interpolated in the middle and acting as some sort of regulator?
“Pulmonary circulation is the system of transportation that shunts de-oxygenated blood from the heart to the lungs to be re-saturated with oxygen before being dispersed into the systemic circulation.” – NIH. So blood flow out of the lungs is via ‘dispersal’. This really is avoiding the question.
Konstantin Meyl (here) points out that there is considerable energy stored in water vortices in humid air and that these contribute a significant portion of the energy input to the human body. There is less energy in the air we breathe out than in the air we breathe in.
So it is possible, then, that scalar waves from the air we breathe directly enter the pulmonary blood flow and make a contribution to the circulation. This makes a lot of sense, with an increase in breathing leading to increased energy input, which then directly causes an increase in circulation. The requirement determines the physiology, as described by Furst. See also: Do we breathe oxygen?
How else is blood assumed to flow away from the delicate lung capillaries? Is there really enough pressure maintained here to continue pumping the blood all the way around the body? Is this local pressure somehow micro-managed by the beating of the heart according to demand? That really would be a miracle!
Are conditions such as Legionnaires’ disease and pneumonia largely the result of bad pulmonary circulation owing to a lack of fresh (energised) air? See: What causes pneumonia?
The Yin and Yang symbol is not far off a stylised depiction of a scalar wave, and descriptions of Qi energy tie it closely to the blood: “Blood nurtures and supports Qi, or the body’s life force; in turn, Qi supplies the power, intelligence, and messages to propel Blood into all the physical structures where it’s required.”
“Blood is the mother of Qi; Qi is the commander of Blood”
“Blood is the material substance that courses through our veins. But without the messages and wisdom of Qi and the power of its flow, we could not live. Without a sufficient quantity and quality of Blood, at the physical level, you cannot create Qi. All the body, mind, and spirit actions you perform in your daily life depend on the value and quality of Blood and Qi. The quality of your Qi helps Blood flow properly throughout your body.” – TCM
A very clear dependent duality here that mirrors the scalar wave theory of Konstantin Meyl.
In 1901 in Calumet, Michigan, two vertical mine shafts were dug 1.5 km apart and 1.3 km deep. A tunnel linking the bottoms of the two shafts was constructed and its length was measured. Now, since the Earth is spherical, we would expect the linking tunnel to be 26.5 cm shorter than the 1.5 km measured at the surface.
It was a bit of a surprise, then, to discover that this tunnel actually measured 20.9 cm longer than the 1.5 km measured at the surface.
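The expected figure follows from similar triangles: radial shafts diverge with depth, so the arc at the bottom is shorter than the arc at the surface in the ratio (R − d)/R. A minimal version of the calculation, assuming a mean Earth radius of 6,371 km (the standard value; the 26.5 cm quoted above implies a slightly different radius):

```python
# Expected shortening of a tunnel linking the bottoms of two radial shafts
# on a spherical Earth: the deep arc subtends the same angle at radius R - d.
R = 6_371_000.0   # assumed mean Earth radius in metres
L = 1_500.0       # surface separation of the shafts in metres
d = 1_300.0       # depth of the shafts in metres

expected_tunnel = L * (R - d) / R
print(f"expected shortening ≈ {(L - expected_tunnel) * 100:.1f} cm")  # ≈ 30.6 cm
```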
Konstantin Meyl’s assertion is that the measuring wire has actually shrunk, owing to the increased strength of the tangential component of the local (magnetic) field, thereby making the measured distance seem larger by comparison (Scalar Waves, page 273).
Theoretical calculations suggest a shrinkage of 40.0 cm; calculations from experimental data give a shrinkage of 40.7 cm. Not exact, but still much better than the original expectations.
The measured distance cannot be extrapolated to the centre of the Earth but instead, when plotted, converges to some point well outside the Earth’s surface.
This is a good confirmation of Meyl’s field theory. The measurements here were made with piano wire, and very basic geometry is used for the calculations. The speed of light is not involved and no fancy interferometers are needed to measure ultra-small distances. Classical physics is way off. Meyl is correct.
We are accustomed to using measuring devices for time and length that are assumed to be immutable in nature, giving the same results wherever we are in the universe and however we are oriented. It is not so.
“The newest definition of the metre acts as a blow for liberty and thus marks the abyss, at which we are standing” – Meyl
Konstantin Meyl describes a Unified Field Theory that eluded Einstein. Relativity is hard to understand but Meyl is harder, lacking even the comforting ideas of foundational space and time upon which to anchor the laws of physics. Instead, movement of field forces forms the foundations of physics, with space and time being emergent properties of these movements.
In his famously banned TED Talk, Rupert Sheldrake mentions that the measured speed of light dropped by about 20 km/s between 1928 and 1945 before resuming its approved value.
The response of the standards authorities was to simply re-define the length of the metre in terms of the speed of light so as to correct for the difference, thus keeping the speed of light constant as required by the theory of relativity.
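The scale of that adjustment is easy to put a number on. A quick sketch, taking the defined value of c and the drop Sheldrake cites:

```python
# Fractional change in the metre needed to absorb a 20 km/s change in the
# measured speed of light, once c is fixed by definition.
c_def   = 299_792_458.0    # m/s, the value fixed by the 1983 definition
delta_c = 20_000.0         # m/s, the drop Sheldrake cites
print(f"fractional change ≈ {delta_c / c_def:.2e}")   # ≈ 6.7e-05, under 0.01%
```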
This is fine from the point of view of relativity, which views the speed of light as a fundamental constant but does in fact allow both length and time to vary according to local conditions.
So why did they change the definition of the metre and not the definition of the second? Why not consider that time may have sped up which makes it seem that it is taking longer for light to move from one place to another?
What does it even mean that time is going ‘faster’, and what sort of physics is it when we have an actual choice over which variables we consider to be ‘fundamental’ and which are the ones that are derivable from the others?
Time is measured via atomic clocks. The frequency of some sort of oscillation is measured via statistical means and the time elapsed is calculated from this frequency: “After exactly 9,192,631,770 oscillations, a second has passed.”
So we are not measuring time directly and cannot therefore say that it is a fundamental property of the universe. We are defining a ‘second’ loosely speaking as “The number of things that have happened since the last time I checked“.
This got me to thinking that we should be regarding ‘something else’ as fundamental and then defining ‘time’ in terms of that ‘something else’.
We can try regarding ‘frequency’ as fundamental which sounds promising as it is precisely what is measured via atomic clocks; they use the phenomenon of ‘resonance’ to measure frequency. Once we do this we can then calculate elapsed time as above by counting oscillations and dividing by the frequency.
Frequency = cycles per second (definition); so number of cycles = frequency × time elapsed (rearranging); therefore time elapsed = number of cycles ÷ frequency.
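This is exactly the arithmetic an atomic clock performs. A minimal sketch, using the caesium-133 frequency that defines the SI second (the cycle count below is a made-up example):

```python
# Recover elapsed time from an oscillation count, as an atomic clock does.
f_cs = 9_192_631_770             # caesium oscillations per second (SI definition)
cycles_counted = 27_577_895_310  # hypothetical count from the resonator
print(f"elapsed = {cycles_counted / f_cs} s")   # 3.0 s
```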
The assumed model by which ‘frequency’ is produced however is via vibration (i.e. movement) of atomic particles within space and time, so it seemed to me that we are back to space and time as fundamental. This is intuitively comfortable but doesn’t address the issue of why it is the speed of light that can be fixed as constant if it is space and time that are considered fundamental.
A big chord was struck for me then upon listening to Konstantin Meyl explain his ideas:
I think most people will take space and time for granted as fixed, immutable properties of the universe, within which all activity (movement) takes place but Meyl turns this all around to make things somewhat counter-intuitive but at the same time more consistent.
There is only The Field:
It is this field that completely determines the nature of space and time.
Matter is comprised of toroidal field vortices.
Field strength determines ‘distance’ and the speed of light (field propagation).
Gravity is an illusion, an emergent property of field geometry.
Einstein’s E = mc² is incorrect
The field is electromagnetic in nature in that it has dual components which create each other via relative movement. Magnetic forces arise from movement relative to charge and, similarly, a charge field arises from movement relative to a magnetic field.
Electricity and magnetism are not separate forces in Meyl’s field; they are just components of the same entity. Therefore, the field properties arise from movement of the field relative to .. itself.
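In classical notation this duality is captured by the Lorentz force law, F = q(E + v × B): a charge at rest in a magnetic field feels nothing, but give it relative motion and an electric-type force appears. A minimal sketch with arbitrary values (Meyl’s own formulation differs, but the mutual dependence of the two components is the same idea):

```python
import numpy as np

# Lorentz force on a moving charge: F = q · (E + v × B).
q = 1.6e-19                       # charge in coulombs
v = np.array([1e5, 0.0, 0.0])     # velocity, m/s (motion along x)
E = np.array([0.0, 0.0, 0.0])     # no electric field present
B = np.array([0.0, 0.0, 1e-3])    # magnetic field along z, tesla

F = q * (E + np.cross(v, B))
print(F)   # force along -y: relative movement turns B into an E-like force
```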
Meyl: “Without movement, there would be no forces or energy .. nothing”
And: “Which brings us to the question: ‘What is movement?‘”
To clarify (or maybe not), classical physics imagines all movement taking place in an already existing space. It supposes that such a thing as an empty vacuum can exist, does exist and comes ready made with all the requisite properties needed in order to host and propagate electric or gravitational fields.
Konstantin Meyl
Einstein’s relativity is a little more flexible, viewing gravity as a deformation of space itself by the matter contained within it. The matter then moves according to the curves in space created by the matter itself. Space and matter are still separate but act upon each other somehow: “Matter tells spacetime how to curve, and curved spacetime tells matter how to move” – John Wheeler
Within Meyl’s universe there is no separation of space, time and matter; there is only the Field. It is the configuration and movement of this field that creates the stuff we know as ‘matter’. The forces which appear to act upon the matter are really just emergent properties of the field acting upon itself. It is the field strength and ‘direction’ that determine the apparent metrics of distance and time, not the nature of space causing the field to behave differently.
The field itself is the primal cause, not a pre-existing space-time universe or a monstrously crude and fantastical Big Bang a few billion years ago.
Measurements of the speed of light for example can now be seen for what they are which is to say, transformations of various sets of observed field phenomena to some (almost arbitrary) common basis so that a comparison can be made in order to say “This is the same as that” or “This measurement is greater than it was last week” etc.
The measuring instruments themselves and the human observers using them are all themselves field phenomena and are therefore subject to the same rules and irregularities. This, according to Meyl, is the explanation for the result of the Michelson–Morley experiment, whereby the speed of light appeared constant no matter what direction it was travelling in or at what speed the Earth was travelling through space.
Light, Earth, equipment and observer all inhabit the same local reference frame and all are subject to the same influences. As the equipment shrinks so the speed of light slows and the two effects compensate for each other thereby appearing to remain constant. Atomic clocks may well change their behaviour but our subjective experience of time also follows the rules of the Field and so nobody notices.
So when we have laboratory set-ups where subject, equipment and observer are all part of the experiment, how can we do objective science? The situation is similar to that of relativity where there is considered to be no global frame of reference and so all experiments can only reflect local laws and conditions.
Meyl, however, prefers to construct a global (absolute) frame of reference within which to perform calculations. Measurements from a local experiment are transformed to this (theoretical) global framework, where calculations are performed before transformation back to the local experimental conditions.
Importance.
Is all this just theoretical sophistry or is there any practical use for this? Does this help with existing results that currently defy explanation?
One place to look may be experiments that give different results dependent upon whereabouts they are in the universe. We wouldn’t usually expect atomic clocks to be affected by subtle changes in gravitational fields. However, something like this appears to have happened in the experiments of Simon Shnoll, where biological, chemical and purely physical phenomena show results that vary in a cyclic fashion seemingly dependent upon the configuration of the solar system.
Piccardi and Kaznacheev similarly found many anomalies that depended upon season, lunar cycles and even eclipses.