Null Physics White Paper
Thanks for the welcome. Let me separate my responses into matter and cosmology.
MATTER
My particle/matter physics requires a great deal of background, and it’s easy to sound a bit “crankish” when you try to condense a 480-page book, developed over 30 years, into sound bites. But here goes.
Most of our current physical theories are constructionist, building mathematical models from empirical data. Naturally, they correspond closely to observed phenomena, since they are built from that same data, but they give us no insight into the foundational nature of the universe because they lack natural philosophy. Relativity, conversely, is based on a few simple principles, but even these don’t give us much insight, because they are reasonable extrapolations of our observations of the natural world. Relativity, for instance, can’t tell us WHY the speed of light is constant in every reference frame or WHY matter generates a gravitational field.
Null Physics attacks the problem from the other pole, starting with the toughest question of all: “Why does the universe exist?” Its premise is that if you can’t provide a rational, complete answer to this question, your physical theories will always contain gaping philosophical holes, and you will forever be unable to explain the universe to any great depth.
Null Physics is based on a geometry that is a solution to an equation whose sum is zero. In short, space is a zero-sum equation of the form 0 = 0 + 0 + 0 + …, and energy and matter are curvatures of space. So we have nothing, in the form of a summation of what appears to us as geometric points, and then we have positive and negative curvatures of space, which constitute matter and energy. Since space is composed of nothing, in the form of geometric points, and curvatures of space are displaced points, the sum of the equation remains zero. (I can’t recall the theorist, but quite a while ago there was an attempt to unify EM as a fifth dimension.) Since the time of Lucretius, or before, the big question has always been “how do you derive something from nothing?” This leads naturally (although admittedly counterintuitively) to the realization that something has to be composed of nothing, for there is no other available source.
So while cosmologists try to reconcile the conservation of energy by claiming that the universe’s negative gravitational potential offsets its matter/energy, they fail to realize the most important thing: there is no difference between a universe whose sum is zero and a universe that is, intrinsically, a formulation of zero. I’ve left out a tremendous amount of supporting theory and rationale, but that’s the long and short of it. The universe is infinite and eternal because it is an equation whose physical sum is nothing, and nothing is by definition unbounded. The “universe equation” is not a wave function, as has been posited in various places; it is a geometry.
A unique aspect of this geometry is that an infinite space of N dimensions has a finite size in N+1 dimensions. An infinite line, for instance, has a finite area. Think of it as cutting the line into an infinite number of segments and stacking them on top of each other at infinite density. The result is not an infinite area, as that would be a plane; nor is it infinitely small, as that would be a line segment. The result is finite. In fact, if we take the width of a line as 0, then in accordance with the poles of the Riemann sphere, (0 × infinity) = 1. The only difference Null Physics requires is that in the physical case, the 1 in this equation is an area (1^2), not a length. In the same way, the infinite space of our universe can be partitioned into an infinite number of cubes which, when stacked upon each other in the fourth dimension, result in a finite hypercube. Infinite three-dimensional space corresponds to finite four-dimensional space. This finite four-dimensional constant is Planck’s constant. It is also responsible for unit elementary charge, among other things.

Universal constants have always presented quite a difficulty, because space would appear to have no place to “store” them, and even if their origin is posited in a universal creation event, there is no reason for their values to remain fixed as the nascent universe evolved. Indeed, how can Planck’s constant have the same value in galaxies billions of light years away as it does here on Earth? String theory attempts to resolve this difficulty by positing a hyperdimensional substructure for space, but this does not solve the problem, because there is no constraint forcing that structure to be fixed throughout space. My theory resolves the universality of governing constants because it has only one: the four-dimensional size of infinite space, which is by definition the same everywhere in infinite space. I was able to calculate this size; it is equal to 3.16 × 10^-26 J-m and is called “unit hypervolume”.
Physically, it’s a hypercube whose edge length is about 0.1 mm. It is the connection between the macro and micro universe and the quintessential definition of finiteness.
Our universe has four and only four dimensions, three of space and one of time. It contains two dimensionally unique three-dimensional substances: space, whose units are of course distance^3, and energy, whose fundamental units are time-distance^2. The reason it is possible to have finite energy density in space is that both have the same dimensional size. Our universe has space and curved space, nothing else. Everything within it can be described as some combination of its four dimensions.
Since unit hypervolume is a four-dimensional finite, it represents the one and only bounding condition for anything, in particular energy. This is why Planck’s constant has units of J-m, and why it is associated with the quantization of energy. Joules, as energy, are three-dimensional, and meters are one-dimensional, for a total of four dimensions. Planck’s constant isn’t exactly equal to space’s four-dimensional size; the proportionality between the two is 2π, but that’s a detail I needn’t address here. Elementary particles, and by this I mean electrons (positrons) and protons (antiprotons), are space-time boundaries: essentially four-dimensional “holes” in space whose size is proportional to unit hypervolume. These holes generate long-range fields that produce the Coulomb and gravitational interactions, and the close-range interaction of these holes is responsible for the Strong and Weak forces. Again, all of this is supported by a wealth of evidence which, given the graphs and equations required, can’t be replicated here. It includes calculations of average nuclear density, white dwarf density, the strength and range of the Strong force, the inter-nucleon spacing of a deuteron, and the maximum material density in black holes.
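As a quick numeric sanity check on the quoted value and the stated 2π proportionality: since Planck’s constant is conventionally tabulated in J-s rather than J-m, I am assuming (my interpretation, not stated in the text) that the dimensional bridge between the two is the speed of light c. Under that assumption, hc/2π reproduces the quoted unit hypervolume:

```python
import math

h = 6.62607015e-34  # Planck's constant, J·s (CODATA exact value)
c = 2.99792458e8    # speed of light, m/s (exact)

# Convert J·s -> J·m via c (an assumption on my part), then apply
# the 2*pi proportionality stated in the text.
unit_hypervolume = h * c / (2 * math.pi)  # J·m

print(f"{unit_hypervolume:.3e} J-m")  # ≈ 3.16e-26 J-m, matching the quoted value
```

Whatever one makes of the interpretation, the arithmetic does land on the stated 3.16 × 10^-26 J-m.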
Mesons, kaons, and particles that decay into electrons (positrons) are essentially high-energy electron states, whereas sigmas, lambdas, and particles that decay into protons (antiprotons) are essentially high-energy proton states. Their instability is caused by the internal presence of bound positive-negative particles in combination with a stable particle. A muon, for instance, is an electron combined with a positive/negative pair. Think of it as an electron combined with positronium at nuclear density. The bound pairs that exist within unstable particles cannot exist singly in nature, as protons and electrons can. The neutral pi meson, for instance, decays into two gamma rays because it is a bound particle/antiparticle pair.
COSMOLOGY
This is the easy part. There is no difference between a universe whose sum is zero and zero itself (same total), so a universal origin is nonsensical. The lack of a universal origin brings into question universal expansion (which, according to my sources, Hubble never agreed with, but I’m not a historian). Enter tired light. It is interesting that you know someone who worked with Zwicky. Since I felt I had validated his tired light concept, I tried to contact his daughter, Barbarina Zwicky, about the possibility of including a unique quote in Null Physics, but was not successful, and out of respect didn’t want to push it.
Before speaking to tired light, let’s talk about universal curvature. The error in the five-dimensional unification I mentioned above was the failure to realize that the two spatial curvatures cited above, responsible for EM and gravity, both occur within a four-dimensional geometry. Spatial curvature/distortion can occur in one of two ways: either normal to space, along the fourth dimension, resulting in positive and negative EM fields, or along space, within the third dimension, resulting in nonpolar fields. Gravitation is caused by the internal distortion of space, which is in turn caused by the hypervolumetric density required to store energy’s three-dimensional volume (time-distance^2) within space (distance^3). Spatial curvature is by definition extraspatial, so even if it occurs along three-dimensional space it is a four-dimensional phenomenon. So the net effect of energy density on space is not to produce a net four-dimensional curvature; it is to produce an average four-dimensional curvature.

It is already “generally” agreed that space is flat or nearly flat, which is just what you would expect from its infinite nature, but it is important to interpret its average curvature correctly. Spatial curvature, by definition, has units of 1/r, an acceleration. This can be interpreted in one of two ways: either statically, as a structural deformation, or dynamically, as an expansion. Sadly, it has been interpreted as the latter. So even though Michelson and Morley, and the experiments that followed, showed us that space is not a material substance, universal expansion has space expanding into larger volumes, expanding over itself.
So what happens if space’s curvature is treated statically? It is still represented as a field of dv/dx, but the dv is not the motion of the underlying metric. Instead, dv/dx is induced in objects moving through it, resulting in the slow expansion of photons over vast distances. This is also why the signals from distant supernovae are broadened: just as the photon is stretched, so too is the distance between photons (I’ve got a great graph in the book). As the photon’s wavelength increases, due to the internal differential velocity to which it is exposed, it loses energy. Since its ensemble motion, because of this internal expansion, is slightly less than c, it behaves like a rapidly moving relativistic particle and decays, emitting microwaves. These microwave decays, as calculated in the book, fall into the CMB band and correspond well to the redshift quantizations found by Tifft and Napier.
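The energy bookkeeping in the paragraph above needs nothing beyond the textbook relation E = hc/λ. What follows is a generic illustration of my own, not the book’s calculation: it only shows how much energy a photon gives up when its wavelength is stretched by a factor (1 + z), and says nothing about the mechanism doing the stretching or about the microwave-decay step.

```python
# How much energy a photon loses when its wavelength grows by (1 + z),
# using only E = h*c/lambda. The stretching mechanism (static curvature
# vs. metric expansion) never enters this arithmetic.
h = 6.62607015e-34  # Planck's constant, J·s
c = 2.99792458e8    # speed of light, m/s

def photon_energy(wavelength_m: float) -> float:
    """Photon energy in joules for a given wavelength in meters."""
    return h * c / wavelength_m

lam0 = 500e-9  # illustrative choice: a 500 nm visible-light photon
z = 1.0        # redshift; wavelength grows to lam0 * (1 + z)

e0 = photon_energy(lam0)
e1 = photon_energy(lam0 * (1 + z))

print(f"fractional energy lost: {1 - e1 / e0:.2f}")  # z/(1+z) = 0.50 here
```

The fractional loss z/(1+z) depends only on the stretch factor, which is why any stretching mechanism, static or dynamic, produces the same per-photon energy deficit.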
There’s a lot more, but I really don’t want to spoil the book.
IN CLOSING
Unfortunately, self-published physics books are invariably the product of uninformed, and in many cases positively deranged, individuals. Just as unfortunately, peer-reviewed journals strenuously reject ideas contrary to the reigning paradigms. So rather than fight the battle a little at a time, I decided to wait until I had some convincing results and published the work I did from 1978 to 2004 all at once. So far it’s gone well with the individuals who have actually read the book, but after reading Lee Smolin’s new book, “The Trouble With Physics,” I fear I might be tilting at windmills with regard to the theoretical physics community.
Thank you for your interest.