Evidence against concordance cosmology

Oh jeez... here we go again. Are we a black hole? We seem to be attracting an inordinate number of "Einstein-was-wrong" types over our event horizon at the moment, all armed with pseudo-intellectual word soup as their primary weapon, and a noticeable shortage of maths.

At least 6. All completely ignorant of the math and functionally ignorant of the meaning of the words in their proper combinations.
 
Disks that are high-luminosity in UV—the ones we studied—are all young galaxies—babies.
Citation to the scientific literature please, Eric L.
What approximate age are you saying all of the galaxies in your sample are? 1 million years? 1 billion years? 100 billion years?
After all, in a static Euclidean universe any age should be "young" compared to infinity :D!
 
Since Eric L mentions the size of galaxies: The Shape of Things by Brian Koberlein.
So is there a way to test whether the ΛCDM model is accurate for our universe, or if some other model might work as well? It turns out that there is, and it’s known as the Alcock-Paczynski cosmological test. The basic idea of this test is to measure two things: the redshift of an object (for which we typically use the ΛCDM model to determine its distance) and the apparent size of the object. Since any model for the structure of the universe will predict a relation between these two quantities, you can use these quantities to test the accuracy of your model.

A recent paper in the Astrophysical Journal (arXiv version: http://goo.gl/xe4vwL) has done just that. Using redshift and apparent size data for distant galaxies, the authors test six models that represent a wide range of possibilities: ΛCDM, Einstein-de Sitter (a flat, matter-only model), Friedmann model (no dark energy), quasi-steady state cosmology (no big bang), a static universe model, and static universe with “tired light.”

What the authors found was that only two models agreed with the Alcock-Paczynski test, the ΛCDM model and the tired light model.
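
To make the Alcock-Paczynski idea above concrete, here's a minimal numerical sketch (my own, not from the paper or the blog post) of the size-redshift curves the two headline models predict. The cosmological parameters (flat ΛCDM with Om=0.3, OL=0.7, H0=70) and the 30 kpc toy galaxy are illustrative assumptions; the formulas are the standard FRW ones.

```python
# Sketch: apparent angular size vs redshift for a toy 30 kpc galaxy,
# in flat LCDM vs a static Euclidean model with a linear Hubble law.
# All parameter values here are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

C = 299792.458           # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s/Mpc (assumed)
OM, OL = 0.3, 0.7        # assumed flat-LCDM densities
D_H = C / H0             # Hubble distance, Mpc

def E(z):
    """Dimensionless expansion rate for flat LCDM."""
    return np.sqrt(OM * (1.0 + z)**3 + OL)

def theta_lcdm(size_mpc, z):
    """Angular size (radians) in flat LCDM: size / D_A(z)."""
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)  # comoving distance / D_H
    d_a = D_H * dc / (1.0 + z)                    # angular diameter distance
    return size_mpc / d_a

def theta_static(size_mpc, z):
    """Angular size in a static Euclidean model, where distance = c*z/H0."""
    return size_mpc / (D_H * z)

size = 0.030  # 30 kpc, in Mpc
rad2arcsec = np.degrees(1.0) * 3600.0
for z in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(f"z={z:3.1f}  LCDM: {theta_lcdm(size, z)*rad2arcsec:7.3f} arcsec"
          f"   static: {theta_static(size, z)*rad2arcsec:7.3f} arcsec")
```

The two curves diverge increasingly with redshift; in ΛCDM the apparent size even stops shrinking near z ≈ 1.6 and grows again, while in the static model it falls as 1/z forever. That divergence is what makes the test possible.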
And other tests rule out tired light models: I’m Tired…
In addition there are the Lubin and Sandage papers:
  1. The Tolman Surface Brightness Test for the Reality of the Expansion. I. Calibration of the Necessary Local Parameters
  2. The Tolman Surface Brightness Test for the Reality of the Expansion. II. The Effect of the Point-Spread Function and Galaxy Ellipticity on the Derived Photometric Parameters
  3. The Tolman Surface Brightness Test for the Reality of the Expansion. III. Hubble Space Telescope Profile and Surface Brightness Data for Early-Type Galaxies in Three High-Redshift Clusters
  4. The Tolman Surface Brightness Test for the Reality of the Expansion. IV. A Measurement of the Tolman Signal and the Luminosity Evolution of Early-Type Galaxies
  5. The Tolman Surface Brightness Test for the Reality of the Expansion. V. Provenance of the Test and a New Representation of the Data for Three Remote Hubble Space Telescope Galaxy Clusters (a 2010 follow-up by Sandage)
 
Maartenn 100: science does not do belief, for belief is an article of faith requiring zero evidence. What science does do is provide testable hypotheses whose validity can be checked against the behaviour of observable phenomena. New evidence is then added to the body of knowledge to give a more accurate understanding.
And this applies to string theory how?
 
Can one of you explain to everyone how a model (expanding universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

Sandage’s papers, which are addressed in our paper, set up a tired-light theory with the wrong relationship of redshift to distance, not a linear one. As we show, with the right (linear) relationship, his data are a good fit to our non-expanding model—with no free parameters.
 
Can one of you explain to everyone how a model (expanding universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data?
Can you explain why you are derailing your own thread about the evidence against concordance cosmology into the irrelevant topic of alternative cosmologies, Eric L? :jaw-dropp

But if you want an answer: The question is based on ignorance or maybe even a lie.
Frequently Asked Questions in Cosmology: What is the evidence for the Big Bang?
Static models like the Steady State Model do not "fit the same data". They fail to fit most of the data.

ETA: Your irrelevant paper is also a tired light model. You invoke an unspecified mechanism to replicate Hubble's Law. That is what tired light does.
 
<snip>

I haven't yet had a chance to watch it, but I do have a question about the sources you certainly used in preparing this!

Can you please list the primary sources you relied upon (other than your own, recent, paper)?

<snip>
I've now watched the presentation, and I must say that I wasn't very impressed (as several others have reported too).

Here are the references I noted, not including the SB test (for which the main reference is your recent paper, Eric L):

Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)

He also far too low in local stars: Portinari, Casagrande, Flynn (2010)

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophys. Bull. 67, 123-134

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)

CBR alignments (etc): (no refs)

Evidence indicates scattering/abs of RF radiation in local universe: Atrophys & SS, 1993

Free Parameters exceed measurements: Disney? (voiceover, not slide)

These are the refs which seem to relate to the topic of this thread, evidence against concordance cosmology.

There are also several mentions of an alternative, plasma cosmology. And there's at least one ref given for that. From what I understood, there's little difference in the alternative in this presentation from what's in the (91-page long!) Plasma Cosmology thread, here in ISF (other than the recent SB paper).

Have I copied the references correctly, Eric L?

I will try to find the actual papers to which the refs in the presentation seem to refer.
 
Sandage’s papers, which are addressed in our paper, ....
That is section "6.2. Lubin and Sandage 2001", where you do not address anything real in the papers :eye-poppi!
You raise two strawmen
  1. There is the inanity of saying that when they used the correct relation for "the Einstein-de Sitter static case" (your words) in 2001, they should have tested your 2012 model instead.
  2. Comments about LS01 in terms of your model again.
It ends with an evidence-less assertion about Lubin and Sandage using Sandage and Perelmuter data.
To be charitable, this emphasizes that your paper has nothing to do with any evidence for or against concordance cosmology, because you do not show the Lubin and Sandage papers are wrong.
 
Can one of you explain to everyone how a model (expanding universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data?
IIRC, this (or something like it) was discussed at some length in the Plasma Cosmology thread.

A quick answer is that this is an entirely artificial, ad hoc, comparison.

For example, the CMB fit to a 2.73 K blackbody, its dipole, and the angular power spectrum: the data are unambiguous, I think (refs: a key COBE paper, the main WMAP papers, and the main Planck papers; I'll provide a list if anyone asks). AFAIK, no one has published a paper showing that "a model (non-expanding universe)" fits the same data. IIRC, there is one paper - by Lerner? - published before WMAP, which shows a weak fit to some of the COBE data, certainly one that's demonstrably worse than a concordance cosmology model.

No "fit the same data" here.

Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

<snip>
Hmm ... I was under the impression that concordance cosmology models have a remarkably good track record, in terms of predictions. The CMB angular power spectrum, for example, and the rich clusters of galaxies 'discovered' in the Planck and SPT data (via the Sunyaev-Zel'dovich effect; confirmed by optical observations).
 
The hypothesis that the universe is expanding, taken by itself—that is, taking this hypothesis alone—makes very few testable predictions.

"that hypothesis alone" is sort of odd. You can imagine hypothesizing a sort of clockwork universe. "The creator has glued all of the galaxies to mysterious mounting-pegs, and then arranged some unseen clockworks to move the pegs apart according to some formula." Sure, in that case there are very few predictions. But nobody (I hope not you) seems to hypothesize that.

The more sensible hypothesis is "things are moving apart governed by some regular laws of motion". And here your statement is wrong. If you hypothesize that the law of motion is "the usual one", i.e. GR, which seems parsimonious, you get a very tightly constrained world in which to make predictions---indeed, under this assumption, any initial-condition hypothesis you wish to make can be easily turned into a suite of predictions.

One very well-known one is that the surface brightness of objects drops as (1+z)^3. Equivalently, it makes quantitative predictions about the apparent size of objects of a given luminosity.

That is not a generic expansion hypothesis. That is a very specific expansion hypothesis---it corresponds to the hypothesis that things are flying apart (to use Newtonian language) without being decelerated by (e.g.) their mutual gravitational attractions. That sounds like the sort of thing we should be testing rather than assuming.
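
For reference, the standard bookkeeping behind this prediction (a textbook sketch, not taken from anyone's paper in this thread): along a ray, the specific intensity divided by the cube of the frequency is invariant, and expansion redshifts the frequency by a factor of 1+z, so

```latex
% Expansion dims surface brightness (Tolman-style counting):
\[
  \frac{I_{\nu,\mathrm{obs}}}{\nu_{\mathrm{obs}}^{3}}
    = \frac{I_{\nu,\mathrm{em}}}{\nu_{\mathrm{em}}^{3}},
  \qquad
  \nu_{\mathrm{obs}} = \frac{\nu_{\mathrm{em}}}{1+z}
  \quad\Longrightarrow\quad
  I_{\nu,\mathrm{obs}} = \frac{I_{\nu,\mathrm{em}}}{(1+z)^{3}}.
\]
% Integrating over the (1+z)-compressed band gives the bolometric case:
\[
  B_{\mathrm{bol,obs}} = \frac{B_{\mathrm{bol,em}}}{(1+z)^{4}}.
\]
```

So the exponent is 3 per unit frequency and 4 bolometrically, while a genuinely static, non-expanding Euclidean model predicts no surface-brightness dimming at all; that contrast is the whole point of the Tolman-style tests listed earlier.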

My colleagues and I assumed z, redshift, is linearly proportional to distance at all distances (as we know it is at small z).

What an odd assumption. What actual physics does this correspond to? Do you suppose that gravity is just "turned off" and unable to affect large-scale structure?

This relationship fits the data set of apparent magnitudes vs redshift of the supernova Ia data just as well as the LCDM model does, and it is almost mathematically indistinguishable from those predictions for that data set.

"almost" is doing a lot of work in that sentence. Your expansion history is the "empty universe" one, and yes it's been tested. It's known to be close to the data but it is NOT a match. It corresponds precisely to the "omegaM = omegaL = 0" hypothesis in mainstream cosmology, which is ruled out at high confidence on the supernova data alone. (Note: these contours include systematic errors. If you think it's "almost" a match based on statistical errors alone, you're even more wrong.)

http://supernova.lbl.gov/Union/figures/Union2.1_Om-Ol_systematics_slide.pdf
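
To put a rough number on "close but NOT a match", here is a sketch (mine; standard FRW luminosity distances, no real supernova data, and the H0 value is an arbitrary assumption that cancels in the difference) comparing the distance moduli of the empty universe and flat ΛCDM:

```python
# Sketch: distance-modulus difference between the empty (Milne,
# Omega_M = Omega_L = 0) universe and flat LCDM over SN redshifts.
import numpy as np
from scipy.integrate import quad

C, H0 = 299792.458, 70.0   # km/s, km/s/Mpc (H0 assumed; cancels in the diff)
D_H = C / H0               # Hubble distance, Mpc

def dl_lcdm(z, om=0.3, ol=0.7):
    """Luminosity distance (Mpc) in flat LCDM."""
    dc, _ = quad(lambda zp: 1.0 / np.sqrt(om * (1 + zp)**3 + ol), 0.0, z)
    return (1.0 + z) * D_H * dc

def dl_empty(z):
    """Luminosity distance (Mpc) in the empty universe; the Milne
    closed form is D_L = (c/H0) * (z + z^2 / 2)."""
    return D_H * (z + 0.5 * z**2)

def mu(dl_mpc):
    """Distance modulus from a luminosity distance in Mpc."""
    return 5.0 * np.log10(dl_mpc) + 25.0

for z in (0.2, 0.5, 1.0, 1.5):
    print(f"z={z:3.1f}  Delta_mu(empty - LCDM) = "
          f"{mu(dl_empty(z)) - mu(dl_lcdm(z)):+.3f} mag")
```

The offset is only of order 0.1 mag and even changes sign across the redshift range, which is exactly why the empty-universe curve looks "almost" right by eye, and why it takes the full supernova sample, systematics included, to exclude it, as the linked Union2.1 contours do.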

It however has the Occam’s razor advantage that it fits the data set using only one adjustable parameter—the Hubble constant—while LCDM requires 3 adjustable parameters—H, the density of matter (including dark matter), and the energy density of “dark energy”. If you accept the LCDM model, the fact that the non-expanding model with linear Hubble relation fits just as well has to be considered a big coincidence.

Part of this is the well-known "cosmic coincidence"---why are omega_M and omega_L anywhere in the same ballpark? 0.3 and 0.7? Why aren't they, say, 0.01 and 0.99? Or 10^-9 and 0.9999999? Nobody knows and this is an active debate. But for your purposes, "hey the supernovae don't rule me out too badly" is precisely this one-parameter coincidence. Any cosmology for which "the blue supernova blob is nearer the middle than the corners" will prompt your claim of a coincidence. (The actual "it's a coincidence" parameter moves the blue blob back and forth along the black "flat-space" line. You mention that LCDM has an extra fit parameter---and indeed it does, constraining the blob to lie near the line rather than far from it---apparently confirming the predictions of inflation.)

Also, I repeat, given that we know that the Universe isn't devoid of matter, what makes you think a zero-deceleration, Omega_M=0 model is parsimonious? Do you have a hypothesis telling us to turn gravitational attraction off? We know galaxies are massive, right?

The data set of disk galaxies, discussed in our published paper, and the data set of elliptical galaxies, taken from others' work and used in this presentation, both show no change in surface brightness with distance. So the simple, no-parameter prediction of the non-expanding hypothesis is confirmed with these two data sets.

Discussed in your published paper, which arbitrarily assumes it can treat galaxies as standard candles. Serious file-drawer effect here, Eric: if your analysis had concluded that there was surface brightness evolution, you'd have said "oops, I guess those weren't good candles". For all we know you did that with a bunch of different datasets.

So, to fit these two data sets, the non-expanding hypothesis takes no free parameters,

a) Your decision to turn off gravity (or set Omega_M=0) is a parameter choice, Eric.

b) These are not "two data sets", they are two different candles measuring a single expansion history. They are degenerate in the Bayesian sense.

c) There are dozens of astrophysical systems which in principle can test cosmological theories. You chose one, i.e. the late-time redshift-distance relation. In stats this raises the issue of "p-hacking". If you have dozens of tests to choose from, it's easy to find one that happens to sort-of-match. Saying "I found a test where my theory is only ruled out at 99%, could be worse, so nyah nyah" is not particularly surprising, and does not make me excited about your theory.
 
Regarding the "anomalous" large-scale structures, the Clowes 2013 discovery was in the news at the time and is not terribly convincing. Accidental structure of this type is indeed created by statistically-homogenous data all the time. http://arxiv.org/abs/1306.1700 does the analysis:

I show that the algorithm used to identify the Huge-LQG regularly finds even larger clusters of points, extending over Gpc scales, in explicitly homogeneous simulations of a Poisson point process with the same density as the quasar catalogue.
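
The flavour of that analysis is easy to reproduce. Here's a toy version (my own sketch, emphatically not the paper's code; the box size, point count, and linking length are arbitrary choices, with the linking length deliberately set near the mean interpoint separation): scatter points uniformly at random, link them friends-of-friends style, and measure the largest linked group.

```python
# Toy friends-of-friends on homogeneous random points: even pure noise
# routinely yields linked "structures" spanning a large fraction of the box.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
box, n, link = 1000.0, 2000, 70.0      # arbitrary illustrative numbers
pts = rng.uniform(0.0, box, size=(n, 3))
# mean interpoint separation is (box**3 / n)**(1/3) ~ 79, so link ~ 0.9x that

parent = np.arange(n)                  # union-find forest
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

for i, j in cKDTree(pts).query_pairs(link):
    parent[find(i)] = find(j)          # merge the two groups

roots = np.array([find(i) for i in range(n)])
members = pts[roots == np.bincount(roots).argmax()]
span = np.linalg.norm(members.max(axis=0) - members.min(axis=0))
print(f"largest group: {len(members)} points spanning {span:.0f} "
      f"of a {box:.0f}-unit box")
```

Chance alignments, not physical structure: which is the linked paper's point about the Huge-LQG.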

The 7Li abundance deficit is very, very well known and is in my mind the "most serious" problem with LCDM cosmology. CMB alignments are still very much up in the air. I read the Portinari paper and, wow, that is an incredibly roundabout way of estimating helium abundances, and exactly no one seems to think it's telling us anything about the cosmological He abundance (which is measured well elsewhere)---where did you fish that up? Don't tell me you just pulled out Figure 5 and called it a "helium abundance measurement" or something?
 
Impressive amount of work there ben m. Thanks for doing that for the education of all of us without an agenda. It is important that we (OK, you, in this case) don't let pseudo-science and innumeracy turn the world of science into some sort of idiocracy by letting the crackpots have free rein on the internet. You'll have no effect on the current batch of time-wasters, of course, but there must be hope that you'll deter some of the waverers from joining their ranks.
 
Can one of you explain to everyone how a model (expanding universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

Sandage’s papers, which are addressed in our paper, set up a tired-light theory with the wrong relationship of redshift to distance, not a linear one. As we show, with the right (linear) relationship, his data are a good fit to our non-expanding model—with no free parameters.

How does a linear relationship make sense? As I understand it, a redshift z corresponds to a frequency ratio of 1/(1+z). So if at distance d, z=1 and at distance 2d, z=2 (in a static model, so this relationship is presumably not supposed to change with time), what happens to a photon emitted from a galaxy at distance 2d on its way here? If it starts with frequency f, wouldn't it have frequency f/(1+1)=f/2 after distance d (halfway) and so f/4 when it arrives, corresponding to z=3 rather than the z=2 that the linear relation requires?

Maybe I'm missing something but it seems that the concordance model is internally consistent and yours isn't.
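
Spelling that arithmetic out (a sketch restating the compounding argument above, plus the standard observation about what a self-consistent static redshift law would have to look like):

```latex
% Redshifts accumulated along a path compound multiplicatively:
\[
  1 + z_{\mathrm{tot}} = (1+z_1)(1+z_2) = 2 \times 2 = 4
  \quad\Longrightarrow\quad z_{\mathrm{tot}} = 3,
\]
% while z linear in distance demands z = 2 at distance 2d. Assuming
% homogeneity, the only distance law that compounds consistently is
% exponential,
\[
  1 + z = e^{H_0 d / c},
\]
% which reduces to the linear law z \approx H_0 d / c only at small z.
```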
 
Also you have to answer questions, not dance around them.
Exactly. I suggested a similar policy here.

I think all of this nonsense has been dealt with by Brian Koberlein in the links I posted earlier. Unless anybody else has some new evidence, it's kind of dead in the water.
That won't stop people putting a few volts through it and causing a few twitches.
 
JeanTate said:
I will try to find the actual papers to which the refs in the presentation seem to refer.
Here's what I found:

<snip>

Here are the references I noted, not including the SB test (for which the main reference is your recent paper, Eric L):

Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)
Sbordone+ (2012): "Lithium abundances in extremely metal-poor turn-off stars". The figure in Eric L's presentation seems to be ~the same as Figure 4 in this paper.

Hansen+ (2015): "An Elemental Assay of Very, Extremely, and Ultra-metal-poor Stars"

He also far too low in local stars: Portinari, Casagrande, Flynn (2010)
Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophys. Bull. 67, 123-134
Karachentsev (2012): "Missing dark matter in the local universe"

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)
Clowes+ (2012): "Two close large quasar groups of size ~350 Mpc at z ~ 1.2"

There's also Clowes+ (2013), which ben m cited: "A structure in the early Universe at z ~ 1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"

CBR alignments (etc): (no refs)

Evidence indicates scattering/abs of RF radiation in local universe: Atrophys & SS, 1993
Nothing for the former, obviously.

"Atrophys & SS" seems to be a typo; perhaps Eric L meant "Astrophysics and Space Science", the journal?

If so, then there were 12 volumes published in 1993, from 199 to 210. I do not intend to find out which paper, or papers, Eric L is referring to.

Free Parameters exceed measurements: Disney? (voiceover, not slide)
I did not try to track this down.

<snip>

Have I copied the references correctly, Eric L?
Have I correctly identified the references, Eric L?
 
Can one of you explain to everyone how a model (expanding universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

This exhibits an incredibly poor understanding of physics and deduction. If the actual stars/galaxies in the Universe are observed to be doing X right now, we should be able to say why. What initial conditions Y, evolving under what laws of physics Z, can lead to X being observed?

If it so happens, by good luck, that your first guess was right---"Oh, hey, if I plug in a boring and obvious Y and the simplest possible approximation of Z, I predict X exactly"---great. If Y+Z-->X doesn't work on the first try, your next job is necessarily parameterized hypothesizing. That is how we discover new things---by following up on theory/experiment disagreements and hypothesizing that something you hadn't previously thought of or known about might be at work.

Please note that the Hubble curve was not fit with free shape-determining parameters. We did not take a curve that "should have been straight", notice that the straight line was a poor fit, and throw in unmotivated quadratic and quartic terms. (That is the only case where your parameter-counting argument is meaningful.) The Supernova Cosmology Project specifically set out to measure Omega_L and Omega_M, which we knew were physically-meaningful quantities, and whose value we didn't have some magic way of guessing.

Anyone who claims to "know" Omega_M without trying to measure it is delusional. Of course we try to measure it! Of course we hypothesize that it could take any value, and compare these hypotheses with the data! How the heck else are we supposed to know it?

Please note that this parameter-counting argument only sounds sensible for a millisecond because of your choice to argue about the Hubble curve as an isolated mystery-function, fit only with mystery-parameters invented out of nowhere, to which we later ascribed dark names and mystery meanings. That's just not true. They're physically motivated quantities and we've overconstrained the heck out of them via different effects on different gravitating systems at different redshifts.

(An analogy: The Earth's curvature can be measured by pointing out how you can stand on the ground, watch the sun set, then quickly ascend a tall building and see it set again. You draw a diagram and show how the sunset-time-vs-height curve looks for different possible Earth radii. "I don't like how you invented a whole new 'radius' parameter to explain these tiny time differences. Of course adding arbitrary parameters improves the fit. Flat-Earth theory has zero parameters.")

So, no, I am absolutely not impressed by the small number of parameters in your fit. Your simple theory describes a straight line, the data is not quite a straight line, so we're forced to look for other physical phenomena (either different initial conditions or different laws to govern the evolution) that explain why not.
 
I did not try to track this down.

Oh, wait, I recognize that one. It's got to be Mike Disney, "The Case Against Cosmology", http://arxiv.org/abs/astro-ph/0009020

Disney, speaking in 2000 very shortly after the supernova data releases (it's a very informal paper; I think it's a talk writeup), was frustrated with people treating cosmology as known and fully constrained. Here is his intro: (bolding mine)

As an example of this triumphalist approach consider the following conclusion from Hu et al. [1] to a preview of the results they expect from spacecraft such as MAP and PLANCK designed to map the Cosmic Background Radiations: “...we will establish the cosmological model as securely as the Standard Model of elementary particles. We will then know as much, or even more, about the early Universe and its contents as we do about the fundamental constituents of matter”.

We believe the most charitable thing that can be said of such statements is that they are naive in the extreme and betray a complete lack of understanding of history, of the huge difference between an observational and an experimental science, and of the peculiar limitations of cosmology as a scientific discipline. By building up expectations that cannot be realised, such statements do a disservice not only to astronomy and to particle physics but they could ultimately do harm to the wider respect in which the whole scientific approach is held. As such, they must not go unchallenged.

You know what MAP (later called WMAP) and Planck saw in their data? Spectacularly-detailed agreement with LCDM. The expectation was precisely realized; as it turned out, cosmologists using the LCDM initial conditions had made extraordinarily precise predictions of CMB features. Disney has not updated his criticisms in any way, and neither has anyone else. That was 15 years ago, in a rapidly changing field. He was advocating for caution, and people were basically being cautious already, and none of his concerns were borne out by the data.

There is no parameter-counting argument against LCDM cosmology. It's terrifically overconstrained.
 
ETA: Your irrelevant paper is also a tired light model. You invoke an unspecified mechanism to replicate Hubble's Law. That is what tired light does.
Another point about your paper containing no evidence against concordance cosmology, Eric L: it contains no evidence for a realistic cosmology :eye-poppi!
This is because of the selection of a static Euclidean model. One thing we know about the universe that we live in is that it is not Euclidean. So what the paper tests is a toy cosmological model, maybe selected to make calculations easier.
We might argue that a Euclidean model is physical because analyzing WMAP and Planck data indicates that the universe is probably flat. But that analysis uses the concordance model.
 
Just curious, RC. How do you know that?
I'm not RC, but it's an easy question to answer.

First, some key assumptions:
  • the 'laws of physics' are universal
  • General Relativity is one such 'law of physics'
  • the universe contains mass

With these three, it follows - inevitably - that the universe we live in is not Euclidean. Would you like me (or someone) to walk you through how the 'non-Euclidean' conclusion follows from the three assumptions, hecd2?

Now there's not much we can do about the first assumption (the 'laws of physics' are universal): short of sending fully-equipped physics labs to every point in the universe, to confirm, empirically, that this assumption is valid, how could it be robustly tested?

The second (General Relativity is one such 'law of physics') need not be rigorously true; it only needs to be as good a description of what we 'see' as any alternative (well, it's a bit more complicated than that, but that'll do as a shorthand).

The third (the universe contains mass) is obviously true.
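
And for anyone who does want the walk-through, here is the compressed version (a sketch; the last step additionally assumes homogeneity and isotropy):

```latex
% Einstein's field equations tie geometry to mass-energy:
\[
  G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}.
\]
% If the universe contains mass, T is nonzero somewhere, so the curvature
% cannot vanish everywhere: the geometry is not globally Euclidean.
% Specialised to a homogeneous, isotropic universe, this becomes the
% Friedmann equation,
\[
  \left(\frac{\dot{a}}{a}\right)^{2}
    = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}},
\]
% where rho > 0 forces curvature (k != 0), expansion/contraction
% (da/dt != 0), or both: a static, flat, matter-filled universe is not a
% solution. (Adding a cosmological constant buys Einstein's static
% universe, but that one is spatially curved anyway.)
```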
 
