Evidence against concordance cosmology

Just curious RC. How do you know that?
JeanTate gave a good answer: GR works, we assume that it applies to the entire universe, and GR is non-Euclidean.
Or we can ask the question: How do we get the recent detection of gravitational waves (only possible in a non-Euclidean universe as far as I know) from the merger of 2 black holes in a Euclidean universe?
 
Can the geometry on large scales in a universe in which GR is true be Euclidean or very nearly Euclidean?
 
Can the geometry on large scales in a universe in which GR is true be Euclidean or very nearly Euclidean?
As I understand it - and I'd be happy to see a good explanation as to why this may be wrong - if the universe, at large scales, contains mass, then it is non-Euclidean at those scales.

Of course, to be clear, we'd need to agree on what, exactly, is meant by "very nearly". :)
 
Can the geometry on large scales in a universe in which GR is true be Euclidean or very nearly Euclidean?

In an expanding universe with matter: no, not at all times. Matter is always contributing negative curvature. There can be a positive curvature (from geometry or from vacuum energy) which cancels this out to yield zero curvature at some particular matter density, but in a Universe where the matter density is always changing, you can only ever pass momentarily through that cancellation-point on the way from one sign to the other.
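The passing-momentarily-through-flatness point can be sketched numerically. This is a minimal illustration assuming a standard FRW Friedmann equation with purely illustrative density values (nothing here is fit to data): a small curvature fraction grows during matter domination, so the cancellation to exact flatness holds at all times only if it is exact to begin with.

```python
# Sketch (assumed FRW background, illustrative densities): evolution of
# the curvature density fraction Omega_K(a) in a universe with matter
# and a cosmological constant. A small initial curvature grows during
# matter domination, so exact flatness at all times requires an exact
# cancellation from the start.

def omega_k_of_a(a, om_m0=0.3, om_l0=0.69, om_k0=0.01):
    """Curvature fraction at scale factor a (a = 1 today)."""
    m = om_m0 * a**-3   # matter dilutes as a^-3
    l = om_l0           # vacuum energy stays constant
    k = om_k0 * a**-2   # curvature dilutes as a^-2
    return k / (m + l + k)

for a in (0.001, 0.01, 0.1, 1.0):
    print(f"a = {a:6.3f}   Omega_K(a) = {omega_k_of_a(a):.5f}")
```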

Einstein's self-proclaimed "biggest blunder" was a zero-curvature solution where a static matter density canceled a cosmological constant, and stayed there with no expansion/contraction of the matter density. It's not what our actual universe is doing, plus if it were it'd be unstable.
 
Ok, so just to be clear, I'm not an advocate for Eric Lerner's cosmology - on the contrary. Also, I completely accept that the presence of mass concentrated into stars, galaxies, clusters and super-clusters results in a non-Euclidean space near those bodies. However, the question is whether, on a cosmological scale, when we look across the observable universe, the geometry is curved (non-Euclidean) or flat. That is an empirical question. And the answer is that empirically, the universe is flat or nearly flat, |omega_K| < 0.005 according to Planck 2015. So, to paraphrase RC, he says to Lerner, your model has a Euclidean geometry and the universe is obviously and trivially non-Euclidean, so sucks yah-boo. But I think that the overall geometry of the universe can be, and empirically is, Euclidean, given GR.
 
The question is not a local but a cosmological one.
Gravitational waves from 1.3 billion light years away leads to the question.
How do we get the recent detection of gravitational waves (only possible in a non-Euclidean universe as far as I know) from the merger of 2 black holes 1.3 billion light years away in a Euclidean universe?

Or: How can we explain the amount of gravitational lensing by galaxies out to high z in a Euclidean universe?
 
And the answer is that empirically, the universe is flat or nearly flat, |omega_K| < 0.005 according to Planck 2015.
Not completely right, hecd2. The answer is that fitting Planck 2015 data to the Lambda-CDM model gives |omega_K| < 0.005, i.e. a nearly flat expanding universe containing dark matter, dark energy and inflation.
The Lambda-CDM model is explicitly non-Euclidean. A flat universe is an approximation to a Minkowski spacetime (once again non-Euclidean).

So the "sucks yah-boo" is two fold.
  • A static Euclidean model ignores the existence of General Relativity.
  • A static Euclidean model is not even realistic for a flat universe (ignores Special Relativity).
 
Ok, so just to be clear, I'm not an advocate for Eric Lerner's cosmology - on the contrary. Also, I completely accept that the presence of mass concentrated into stars, galaxies, clusters and super-clusters results in a non-Euclidean space near those bodies. However, the question is whether, on a cosmological scale, when we look across the observable universe, the geometry is curved (non-Euclidean) or flat. That is an empirical question. And the answer is that empirically, the universe is flat or nearly flat, |omega_K| < 0.005 according to Planck 2015. So, to paraphrase RC, he says to Lerner, your model has a Euclidean geometry and the universe is obviously and trivially non-Euclidean, so sucks yah-boo. But I think that the overall geometry of the universe can be, and empirically is, Euclidean, given GR.
I think there may be some confusion over what "Euclidean" means.

One way to think about this: what is the distance between here and some distant quasar?

In a Euclidean universe, the 'luminosity' distance is the same as the 'radar' distance, is the same as the ... (you get the idea).

So, if even a pair of these distances is different, then the space cannot be Euclidean, right?

If you plug in a 'big' redshift - 2, say - into Ned Wright's Cosmology Calculator, and choose "flat", you find that the light travel time, the comoving radial distance, angular size distance, and luminosity distance are all different. Significantly so.
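A rough version of that calculator exercise, assuming a flat LCDM model with illustrative parameters H0 = 69.6 km/s/Mpc and Omega_m = 0.286 (Wright's defaults, not a fit of my own), shows the distance measures disagreeing at z = 2:

```python
# Sketch: distance measures at z = 2 in a flat expanding LCDM universe.
# H0 and Omega_m below are illustrative defaults, not a fit to data.
from math import sqrt

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, h0=69.6, om_m=0.286, n=10000):
    """Line-of-sight comoving distance in Mpc, flat LCDM, trapezoid rule."""
    om_l = 1.0 - om_m
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = sqrt(om_m * (1 + zi)**3 + om_l)   # E(z) = H(z)/H0
        total += (0.5 if i in (0, n) else 1.0) / e
    return (C_KM_S / h0) * total * dz

z = 2.0
d_c = comoving_distance(z)
d_a = d_c / (1 + z)   # angular size distance
d_l = d_c * (1 + z)   # luminosity distance
print(f"comoving   {d_c:8.1f} Mpc")
print(f"ang. size  {d_a:8.1f} Mpc")
print(f"luminosity {d_l:8.1f} Mpc")
```

Even with zero scalar curvature, the three numbers differ by factors of several, which is JeanTate's point.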
 
Ok, so just to be clear, I'm not an advocate for Eric Lerner's cosmology - on the contrary. Also, I completely accept that the presence of mass concentrated into stars, galaxies, clusters and super-clusters results in a non-Euclidean space near those bodies. However, the question is whether, on a cosmological scale, when we look across the observable universe, the geometry is curved (non-Euclidean) or flat. That is an empirical question. And the answer is that empirically, the universe is flat or nearly flat, |omega_K| < 0.005 according to Planck 2015. So, to paraphrase RC, he says to Lerner, your model has a Euclidean geometry and the universe is obviously and trivially non-Euclidean, so sucks yah-boo. But I think that the overall geometry of the universe can be, and empirically is, Euclidean, given GR.

There are three different terms in the Einstein field equation that can contribute to the Ricci curvature tensor. There is matter (omega_M), there is vacuum energy (omega_L), and there is "scalar curvature" (omega_K). You are right that Planck shows the scalar curvature to be near zero---a flat underlying geometry. But that is different than saying spacetime is flat! Planck shows that the spacetime is not flat, and that the curvature is due to the matter and cosmological-constant terms rather than the scalar term.
 
There are three different terms in the Einstein field equation that can contribute to the Ricci curvature tensor. There is matter (omega_M), there is vacuum energy (omega_L), and there is "scalar curvature" (omega_K). You are right that Planck shows the scalar curvature to be near zero---a flat underlying geometry. But that is different than saying spacetime is flat! Planck shows that the spacetime is not flat, and that the curvature is due to the matter and cosmological-constant terms rather than the scalar term.
omega_K is identically equal to 1 - omega_L - omega_M and of course it is a scalar.

ETA: And of course I accept that flat space does not necessarily mean flat spacetime, which is clear from the EFE, but we haven't been talking about spacetime but about the geometry of space.
 
<snip>

ETA: And of course I accept that flat space does not necessarily mean flat spacetime, which is clear from the EFE, but we haven't been talking about spacetime but about the geometry of space.
Thanks for that, it's an important clarification.

hecd2 said:
JeanTate said:
If you plug in a 'big' redshift - 2, say - into Ned Wright's Cosmology Calculator, and choose "flat", you find that the light travel time, the comoving radial distance, angular size distance, and luminosity distance are all different. Significantly so.
Of course. All of this is a consequence of expansion in a flat universe.
But not of expansion in a 'flat space universe', one with Euclidean geometry, right?

For example, in a Euclidean (space) universe which is expanding, you would expect the light travel time and luminosity distance to be different (or would you?), but you can't get 'time dilation' effects in the light-curves of supernovae (standard SN1a ones), can you?
 
Einstein's self-proclaimed "biggest blunder" was a zero-curvature solution where a static matter density canceled a cosmological constant, and stayed there with no expansion/contraction of the matter density. It's not what our actual universe is doing, plus if it were it'd be unstable.
That's true apart from the highlighted "zero-curvature" part. Einstein's static solution is not zero-curvature. In fact, the three spatial dimensions of that solution form a hypersphere, hence non-zero curvature is present even before you take time into account, and Einstein's static solution is non-Euclidean even in the spatial dimensions.

ETA: And of course I accept that flat space does not necessarily mean flat spacetime, which is clear from the EFE, but we haven't been talking about spacetime but about the geometry of space.
I wasn't sure whether you were asking about the geometry of spacetime or space. Spacetime is clearly non-Euclidean.

Space might be approximately Euclidean in the very large, and that's consistent with observations, but space can't be Euclidean at small scales in the presence of inhomogeneous distributions of matter. Gravitational lensing, for example, is an observed example of non-Euclidean space.

All of the above assumes GR is substantially correct, of course. In most threads that would go without saying.
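The lensing point above can be made quantitative with the classic solar-limb case. This is a sketch using the standard GR deflection formula alpha = 4GM/(c^2 b), with rounded physical constants:

```python
# Sketch: GR light deflection for a ray grazing the Sun, the classic
# observed case of non-Euclidean geometry near mass.
# Formula: alpha = 4 G M / (c^2 b), impact parameter b = solar radius.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
R_SUN = 6.957e8    # solar radius, m

alpha_rad = 4 * G * M_SUN / (C**2 * R_SUN)
alpha_arcsec = alpha_rad * 206265   # radians to arcseconds
print(f"deflection at the solar limb: {alpha_arcsec:.2f} arcsec")
```

This reproduces the famous ~1.75 arcsecond deflection, twice the Newtonian value, measured since the 1919 eclipse expeditions; galaxy-scale lensing is the same physics at larger mass and distance.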
 
Here's what I found:

<snip>

He [i.e. helium] also far too low in local stars: Portinari, Casagrande, Flynn (2010)
Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

The figure in Eric L's presentation seems to be Portinari, Casagrande, Flynn (2010)'s Figure 5.

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophys. Bull. 67, 123-134
Karachentsev (2012): "Missing dark matter in the local universe"
The paper itself is behind a paywall, however there is an arXiv preprint. The figure in Eric L's presentation is ~the same as Karachentsev (2012)'s Figure 4.

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)
Clowes+ (2012): "Two close large quasar groups of size ~350 Mpc at z ~1.2"

There's also Clowes+ (2013), which ben m cited: "A structure in the early Universe at z ~1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"
The figure in Eric L's presentation seems to be Figure 2 in Clowes+ (2013), not anything in Clowes+ (2012).

<snip>

Free Parameters exceed measurements: Disney? (voiceover, not slide)
I did not try to track this down.
However, ben m seems to have done so:

ben m said:
Oh, wait, I recognize that one. It's got to be Mike Disney, "The Case Against Cosmology", http://arxiv.org/abs/astro-ph/0009020
As far as I can tell, the figure in Eric L's presentation is not in Disney (2000).

Have I correctly identified the references, Eric L?
Have I?
 
Eric L:

The alternative hypothesis-- that the universe is not expanding and the Hubble relation is due to energy loss that happens to the light as it travels-- makes the prediction that surface brightness of objects (as measured in AB magnitude—in other words per unit frequency) is constant with distance. To test that hypothesis for objects of the same intrinsic luminosity, however, you need to assume an actual relation between redshift and distance. My colleagues and I assumed z, redshift, is linearly proportional to distance at all distances (as we know it is at small z).

It seems to me that this claim must be supported in several ways. Where does the energy loss go? What is the mechanism for this energy loss? Is there any experimental or observational evidence for this energy loss? Is there any theoretical support, say, within quantum field theory, for this energy loss?
Can you provide a proposed theoretical framework to pursue in order to account for this energy loss?
 
Thanks to Jean Tate and Ben m for posting the actual references. Not sure why they are not in the slides—might be the wrong version of the slides went into the video. Hope you found some other interesting stuff in digging these up! Anyway, I will post the missing ones here as soon as I get a moment—maybe tomorrow.

I will limit my replies to comments on my initial point-- concerning only the data sets dealing with surface brightness, size and luminosity. I’ll get to other points as we go along, but this will structure the discussion.

I started with the Tolman test in part because it applies to ANY expansion that is hypothesized to account for the Hubble relation. It can be LCDM or anything else. No, Ben m, the formula (1+z)^3 makes no other assumptions about the expansion - Tolman derived it back in 1930 from basic geometric arguments that apply to ANY expansion from any cause and with any dynamic, so long as this is the cause of the redshift-distance relation. That is what makes the Tolman test a very powerful one for deciding whether the universe is expanding or not. The Big Bang, inflation, LCDM and all that require additional hypotheses, not just expansion, so expansion of the universe is the most general question. To make additional predictions beyond the Tolman test you need additional hypotheses beyond just expansion.
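The (1+z)^3 factor quoted above can be checked arithmetically. A minimal sketch of the expected dimming, in magnitudes, for the per-frequency (AB) and bolometric cases:

```python
# Arithmetic check of the Tolman dimming exponents: in any expanding
# universe, bolometric surface brightness falls as (1+z)^4, while
# surface brightness per unit frequency (AB magnitudes) falls as
# (1+z)^3.
from math import log10

def tolman_dimming_mag(z, exponent):
    """Surface-brightness dimming in magnitudes for a (1+z)^exponent law."""
    return 2.5 * exponent * log10(1 + z)

print(f"AB (per-frequency) dimming at z=1: {tolman_dimming_mag(1.0, 3):.2f} mag")
print(f"bolometric dimming at z=1:         {tolman_dimming_mag(1.0, 4):.2f} mag")
```

At z = 1 that is over two magnitudes of dimming even in AB units, which is why the test has discriminating power.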

My colleagues' and my hypothesis requires no assumption about mass at all. It hypothesizes NO expansion of the universe: the redshift is not due to expansion, but to something that happens to EM radiation as it travels. Specifically, the photons redshift in such a way that z is proportional to distance traveled. No, we don't have a physical mechanism for that - it would be a new phenomenon, a modification of electromagnetism at large distances. No, we don't need a mechanism to postulate a quantitative relationship. Many quantitative relationships, like the Balmer formula or the Planck blackbody formula, were discovered before the physical processes behind them were, and the formulae were clues that led to the physical phenomena. In this case, we are extrapolating the formula that we know applies for smaller distances to all distances.

No, Ben m, this linear redshift-distance relationship does NOT correspond to the expanding universe solution with no mass or dark energy. Check it out with my "good pal" Ned Wright's handy calculator. Here's how to do it: take Wright's value for "luminosity distance", divide by (1+z)^0.5 (this takes into account that bolometric luminosity decreases with the redshift in the energy of the light itself, no matter what the cause).

Plug in the numbers with your zero, zero options and compare the results to a straight line (distance proportional to z) —they don’t fit.

Now if we just ask instead "what combination of LCDM parameters gives us the closest fit to a straight line for the redshift range (z<1.5) covered by the SNIa data?" then we get not 0,0 but omega_m = 0.26 and omega_lambda = 0.74. Amazing! Without looking through a telescope, or even out our window, with pure mathematics we can determine that: IF the universe is really not expanding and redshift is proportional to distance, then LCDM cosmologists will conclude that there is 74% dark energy and 26% dark matter. What a coincidence! (Try this yourselves - I just did this tonight. Lots of fun!)
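A sketch of that exercise, assuming the recipe exactly as described (the helper names and the trapezoid integration are mine, not Ned Wright's code): compute the luminosity distance D_L for a given (omega_m, omega_lambda), form D_L/(1+z)^0.5 as prescribed, and measure how far that quantity departs from strict proportionality to z over the SNIa range:

```python
# Sketch of the exercise described above (hypothetical helper names;
# the numerical integration is mine, not Ned Wright's code). Compute
# D_L for given (omega_m, omega_lambda), form D_L/(1+z)^0.5, and
# measure the departure from strict proportionality to z.
from math import sqrt, sinh

C_KM_S = 299792.458  # speed of light in km/s

def lum_distance(z, om_m, om_l, h0=70.0, n=4000):
    """Luminosity distance in Mpc for an open or flat FRW model."""
    om_k = 1.0 - om_m - om_l
    dz = z / n
    chi = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = sqrt(om_m*(1+zi)**3 + om_k*(1+zi)**2 + om_l)
        chi += (0.5 if i in (0, n) else 1.0) / e
    chi *= dz
    dh = C_KM_S / h0
    if om_k > 1e-8:   # open: apply the sinh curvature correction
        dm = dh / sqrt(om_k) * sinh(sqrt(om_k) * chi)
    else:             # flat
        dm = dh * chi
    return (1 + z) * dm

def linearity_spread(om_m, om_l, zs=(0.25, 0.5, 0.75, 1.0, 1.25, 1.5)):
    """Fractional spread of [D_L/(1+z)^0.5]/z; zero means exactly linear."""
    r = [lum_distance(z, om_m, om_l) / (1 + z)**0.5 / z for z in zs]
    return (max(r) - min(r)) / min(r)

print(f"empty (0, 0):      spread = {linearity_spread(0.0, 0.0):.3f}")
print(f"LCDM (0.26, 0.74): spread = {linearity_spread(0.26, 0.74):.3f}")
```

With these assumptions the (0.26, 0.74) combination does come out closer to a straight line than the empty (0, 0) case over z < 1.5, which is the coincidence Eric L describes.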
 
Thanks for doing Eric's job for him, JeanTate.

The paper itself is behind a paywall, however there is an arXiv preprint. The figure in Eric L's presentation is ~the same as Karachentsev (2012)'s Figure 4.

OK, this is another easy followup. Yes, the local group does seem to be below the average cosmic density ... surprising no one. LCDM straightforwardly predicts that the Universe has wildly varying local density, with galaxies found in superdense clusters, thin filaments and near-voids; in fact, finding yourself in a region of "the average density" is an atypical experience. (A similar example: how many Americans live in a county with the national average density of 90 people per square mile?)

Anyway, http://adsabs.harvard.edu/abs/2014MNRAS.445..988N runs the detailed calculation. Yes, local-underdensities like the observed local one are perfectly common outcomes of LCDM initial conditions and normal GR evolution, and particularly common for spiral galaxies.
 
