Evidence against concordance cosmology

In Section 2 ("The adopted cosmology"):

Lerner+ (2014) said:
Since the SB of galaxies is strongly correlated with the intrinsic luminosity, for a correct implementation of the Tolman test it is necessary to select samples of galaxies at different redshifts from populations that have on average the same intrinsic luminosity.

No reference given, and I for one would really like to see evidence for "the SB of galaxies is strongly correlated with the intrinsic luminosity"! Presumably, the authors are referring to intrinsic luminosity in the UV/optical/NIR, but they don't say so. Ditto SB.

There's also a potential problem re "average" ... it depends on which average is used, and what the SB-intrinsic luminosity relationship is.

It should be noted that this cosmological model is not the Einstein-De Sitter static Universe often used in literature.

I guess one has to take the authors' word for it; again, it would be nice if there were some references.

The choice of a linear relation is motivated by the fact that the flux-luminosity relation derived from this assumption is remarkably similar numerically to the one found in the concordance cosmology, the distance modulus being virtually the same in both cosmologies for all relevant redshifts. This is shown in Fig. 1 where the two relations are compared to each other [...]

Here's the caption for Figure 1:

Comparison of the distance modulus for Vega magnitudes for the adopted Euclidean non-expanding universe with linear Hubble relation cosmology and the concordance cosmology. Upper panel: The distance modulus (m–M) = 25+5Log(cz/Ho)+2.5Log(1+z), where H0 = 70 in km s−1 Mpc−1, as a function of the redshift z for an Euclidean Universe with d= cz/H0 (black line) compared to the one obtained from the concordance cosmology with Ωm = 0.26 and ΩΛ = 0.76 (red line). Middle panel: Ratio of the two distances (concordance/Euclidean). Lower panel: Distance modulus difference in magnitudes (concordance-Euclidean). This graph shows clearly the similarity of the two, making galaxy selection in luminosity model-independent.

The last phrase, "making galaxy selection in luminosity model-independent", is clearly not true, as the figure itself (and the text) shows.

and, in Fig. 2, to supernovae type Ia data. Up to redshift 7, the apparent magnitude predicted by the simple linear Hubble relation in a Static Euclidean Universe (SEU) is within 0.3 magnitude of the concordance cosmology prediction with ΩM = 0.26 and ΩΛ = 0.74. The fit to the actual supernovae data is statistically indistinguishable between the two formulae.

I find the inconsistency in notation - Ωm (Fig. 1) vs ΩM (in the text referring to Fig. 2) - not only annoying, but possibly indicative of sloppiness in the research itself. Combined with the - to me - amazing lack of references, it strongly suggests both lax standards in the research and poor peer review. The latter is perhaps not surprising, as the reviewers chosen by the Editor of "International Journal of Modern Physics D" were likely not experienced astronomers.

However, the last sentence - "The fit to the actual supernovae data is statistically indistinguishable between the two formulae" - is just too much ... if it's true, then a 'goodness of fit' statistic should have been quoted. Myself, I find it very hard to accept that it's true (more later).
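
For anyone who wants to check the "within 0.3 magnitude" claim directly, here is a minimal Python sketch (mine, not from the paper) that computes both distance moduli quoted above: the Euclidean linear-Hubble formula from the Fig. 1 caption, and a flat concordance model integrated numerically, assuming H0 = 70 km/s/Mpc, Ωm = 0.26 and ΩΛ = 0.74.

```python
# Minimal sketch (not from Lerner+ 2014): compare the distance modulus of the
# adopted static Euclidean universe, mu = 25 + 5*log10(cz/H0) + 2.5*log10(1+z),
# with a flat LCDM model integrated numerically.
# Assumed parameters: H0 = 70 km/s/Mpc, Omega_m = 0.26, Omega_Lambda = 0.74.
import numpy as np

C_KMS = 299792.458            # speed of light in km/s
H0 = 70.0                     # Hubble constant in km/s/Mpc
OMEGA_M, OMEGA_L = 0.26, 0.74

def mu_euclidean(z):
    """Distance modulus for d = cz/H0, plus the paper's 2.5*log10(1+z) term."""
    d_mpc = C_KMS * z / H0
    return 25.0 + 5.0 * np.log10(d_mpc) + 2.5 * np.log10(1.0 + z)

def mu_lcdm(z, n=20000):
    """Distance modulus for flat LCDM: D_L = (1+z) * (c/H0) * integral dz'/E(z')."""
    zp = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    integral = np.sum((inv_E[:-1] + inv_E[1:]) * np.diff(zp)) / 2.0   # trapezoid rule
    d_l = (1.0 + z) * (C_KMS / H0) * integral                         # Mpc
    return 25.0 + 5.0 * np.log10(d_l)

for z in (0.1, 0.5, 1.0, 2.0, 4.0, 7.0):
    print(f"z = {z:4.1f}  Delta_mu (LCDM - Euclidean) = {mu_lcdm(z) - mu_euclidean(z):+.3f} mag")
```

If the paper's claim holds, the printed differences should stay below roughly 0.3 mag all the way out to z = 7.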
 
So, I guess I was right in thinking that you are having a hard time putting this down. Is that a fair summation?
 
So I've started reading Lerner+ (2014), and have some questions. ...
The problem with discussing Lerner+ (2014) in this thread is that the paper contains no evidence against concordance cosmology.
It is an invalid paper supporting a toy model of the universe (a static Euclidean universe) by assuming a tired light explanation for cosmological redshift. So there is no point in extensive analysis of the paper. We should wait for Eric L to provide valid evidence against concordance cosmology.
 
So, I guess I was right in thinking that you are having a hard time putting this down. Is that a fair summation?

Having read all 91 (!) pages of the Plasma Cosmology - Woo or not thread (started by RC!), I'm hoping that, with Eric L's participation, we will at last have a deep discussion of the topic (plasma cosmology, or PC for short), here in ISF.

As far as I can tell, there has been no such discussion anywhere on the internet; at least, not one where there is a proponent of PC who is head and shoulders above all the others I've seen (of course, it's entirely possible that I simply haven't come across a discussion of PC with a proponent as competent, in physics at least, as Eric L).

As Lerner+ (2014) is peer-reviewed, I think I can learn a lot about the process of getting scientific papers published, not least to learn what gets through such a review, in terms of the sorts of things I'd expect to be picked up on (but weren't). For example, as I've already mentioned, I'm quite surprised that whoever reviewed Lerner+ (2014) did not insist on more references, to back up the bald statement. Too, I find it amazing that there was no call for the authors to quote a 'goodness of fit' statistic.
 
One final thing, since this is fun; then I'm checking out for a while.

I tested the best tired-light model I could come up with. I now realize why that's so different than Eric's model. He insists on:


d= cz/H0


And calls it a static-universe tired-light theory. On top of the (unjustified) idea of tired light to begin with, Eric has chosen a totally unphysical implementation of it.

Consider sources at d = c/H and d=2c/H. Lerner tells us that these will be detected with redshifts of z=1 and z=2 respectively. A photon from a z=1 source has only half of its original energy (or has doubled in wavelength). A photon from a z=2 source has only 1/3rd of its original energy left (or has tripled in wavelength). Tired Light theory attributes this to some property of space that saps energy from light passing through.

Here's the especially broken thing about Lerner's version. I will set c/H = 1 for simpler typing.

Emit an E=8 eV photon at d=1, which Lerner says is z=1. When it gets to d=0 it has E= 4 eV.

Emit an E= 12 eV photon at d=2, which Lerner says is z=2. When it gets to d=0 it has E=4 eV.

But a photon from d=2 has to pass by d=1 on the way to d=0. The two halves of its journey are, according to Eric's curve, very different.


Are we supposed to conclude that the z=2 photon went from 12 eV to 8 eV (a factor of 0.66) in the *first* half of its journey, then went from 8 eV to 4 eV (a factor of 0.5) in the second half? Because that's what Lerner's equation says. Surely the path between d=1 and d=0 has the same effect on (a) a photon that travels that path only, vs (b) photons arriving from further away. (If not, Lerner's theory is even weirder.) In effect, Lerner's equation says that the tired-light-ness of distant space is less effective than the tired-light-ness of nearby space. He wrote something that looks simple on paper (d = cz/H) but which implies ridiculous physical complication---including putting the Earth at the geometric center of a set of spherical shells of different tired-light effects, all stacked and fine-tuned in some weird way to make Lerner's equation look linear in conventional notation.
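
To spell out that arithmetic, here is a tiny sketch (mine), with c/H = 1 and assuming, as in case (a), that the inner leg treats every passing photon the same:

```python
# Sketch with c/H = 1: under the linear relation z = d, a photon emitted at
# distance d arrives with fraction 1/(1+d) of its energy. If the inner leg
# (d = 1 -> 0) acts the same on every passing photon, the outer leg's effect
# follows by division.
def arriving_fraction(d_emit):
    """Energy fraction left on arrival at d = 0 for a source at d_emit (z = d)."""
    return 1.0 / (1.0 + d_emit)

frac_outer = arriving_fraction(2.0) / arriving_fraction(1.0)   # leg d = 2 -> 1
frac_inner = arriving_fraction(1.0)                            # leg d = 1 -> 0
print(frac_outer, frac_inner)   # 0.667 vs 0.5: equal path lengths, unequal losses
```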

If you insist on attributing tired-light properties to space, the only remotely parsimonious thing to write is, say,

1/(z+1) = e^(-d/d0)

with some scale factor d0. That's a well-behaved, one-parameter, "local" theory of energy-loss-while-traversing-a-medium. (And that's the theory I tested. It disagrees horribly with the supernova data. Oh well.)
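
For contrast, here is a short sketch (mine) of that one-parameter exponential law, showing the property that makes it "local": the energy surviving any leg depends only on the leg's length, not on where the leg sits.

```python
# Sketch of the "local" tired-light law 1/(1+z) = exp(-d/d0): attenuation over
# a leg depends only on its length, so the relation composes consistently.
import math

d0 = 1.0   # the single free parameter: attenuation length scale

def surviving_fraction(path_length):
    """Energy fraction left after traversing a path of the given length."""
    return math.exp(-path_length / d0)

def redshift(d_emit):
    """Observed redshift of a source at distance d_emit."""
    return 1.0 / surviving_fraction(d_emit) - 1.0

leg = surviving_fraction(1.0)        # either half of a d = 2 journey
total = surviving_fraction(2.0)      # the whole journey: equals leg * leg
print(leg, total, leg * leg)
print(redshift(1.0), redshift(2.0))  # ~1.72 and ~6.39: not linear in d
```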

Lerner's theory is a tired-light, static, heliocentric theory, with an unknown number of free parameters tossed in (and their values chosen) solely to yield a linear d-z relation. Contra Lerner, a linear d-z is not some simple minimal theory that we should default to except under strong compulsion---a linear d-z is a bizarre trainwreck of two variables that really don't want to be related linearly.
 
Yes. I was also hoping for an answer to this question ....

How does a linear relationship make sense? As I understand it, a redshift z corresponds to a frequency ratio of 1/(1+z). So if at distance d, z=1 and at distance 2d, z=2 (in a static model, so this relationship is presumably not supposed to change with time), what happens to a photon emitted from a galaxy at distance 2d on its way here? If it starts with frequency f, wouldn't it have frequency f/(1+1)=f/2 after distance d (half way) and so f/4 when it arrives, corresponding to z=3 rather than z=2?

Maybe I'm missing something but it seems that the concordance model is internally consistent and yours isn't.
 
Yes, thank you to Ben M for the explanation (debunking). Now I can sleep. Also seems that someone I've never heard of within scientific literature, who is parroting this thread on a Saturnist website (when he's not using christian ones), has gone suddenly quiet.
 
Eric, any word? If you've dropped out of the thread, I have no reason to keep nitpicking.

I was thinking about the exercise you do with the Tolman test---you apparently thought about "what actual galaxy size evolution path would explain this", and you thought the resulting formula was too complicated, and you claim to have scored a point against LCDM.

If you're interested, I can try the same exercise with *your* model. Your model does not contain a single constant tired-light attenuation length scale d0; to get your desired d=cz/H linear behavior, you have inserted a tired-light theory with a space- or time-variation (d0(t) or d0(d)) in the already-mysterious attenuation process. If you're interested, I might try to construct a multiparameter fit to d0(d) and see how many parameters I need to describe the data. Of course this is not a watertight argument, but it *is* an argument you rely on heavily yourself so I think it's fair play.

Can you tell me, though, would you prefer that I model a time-varying or a space-varying tired-light-function? In principle, since they're both utterly-unknown phenomena that we're trying to use cosmological data to discover, the standard approach would be to do BOTH and see which parts of either parameter-space are ruled out. But that's an approach you *don't* seem to like, so I thought I would check before wasting time.
 
Hi all, got a little time today. I will answer one basic point on the linear Hubble relation and then move on to the second point relating to Lithium and Helium abundances.

Ben m, you really need to brush up a bit on math. Of course it is possible to rewrite a linear Hubble relation in a single-parameter differential form. Here it is:

(dE/dt)/E = -(frequency) × (initial wavelength) / (Hubble length)

where E is the energy of the photon at any time, “frequency” is its frequency at any time, “initial wavelength” is the photon’s wavelength when it was emitted, and “Hubble length”, the sole parameter, is c/H, where c is the speed of light and H is the Hubble parameter.

To put this into words, in the time it takes to travel one wavelength, a photon loses a fraction of its energy equal to the ratio of its initial wavelength divided by the Hubble length.

If we change the hypothesis to say “current wavelength” rather than “initial wavelength” we then get a logarithmic relation between z and d rather than a linear one.

This is NOT a physical mechanism, this is just rewriting the relationship mathematically.

Of course the linear relation is NOT a logarithmic relation, so the photon traveling twice the distance does not lose twice the energy. As its own frequency decreases, the proportional rate of energy loss slows down. But it is still all described by a simple one-parameter equation—the one written here.
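
It is worth checking numerically what the stated rule implies. The sketch below is my own reading of it, not code from any paper: taking ν = E/h, λ0 = ch/E0 and x = ct, the rule (dE/dt)/E = -ν × (initial wavelength)/(Hubble length) becomes dE/dx = -(E^2/E0)/L_H, with the minus sign expressing energy loss. Integrating that and comparing with the linear prediction z = x/L_H, for two very different initial photon energies:

```python
# Sketch (my reading of the stated rule, not code from any paper): integrate
# dE/dx = -(E**2 / E0) / L_H and check that it gives a linear Hubble relation
# z = x / L_H, independent of the photon's initial energy E0.
L_H = 1.0                                 # Hubble length c/H, arbitrary units

def redshift_after(x_total, E0, steps=200_000):
    """Euler-integrate the energy loss over a path of length x_total; return z."""
    E = E0
    dx = x_total / steps
    for _ in range(steps):
        E -= (E * E / E0) / L_H * dx      # dE = -(E^2/E0)/L_H dx
    return E0 / E - 1.0                   # 1 + z = E_emitted / E_observed

for x in (0.5, 1.0, 2.0, 4.0):
    z_low = redshift_after(x, E0=1.0)     # e.g. an optical photon
    z_high = redshift_after(x, E0=1000.0) # e.g. an X-ray photon
    print(f"x = {x:3.1f} L_H:  z = {z_low:.4f} (low E0), {z_high:.4f} (high E0), "
          f"linear prediction {x / L_H:.4f}")
```

Under this reading, replacing "initial wavelength" with "current wavelength" gives dE/dx = -E/L_H and hence the logarithmic d-z relation mentioned above; and because z comes out independent of E0, all photons are stretched by the same factor, which is relevant to the spectral-compression question asked further down.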
 
On the light elements data, a couple of points. The data I cite on helium abundances is derived from observations of nearby stars in our immediate neighborhood. The data is very high quality as a result. To get helium abundances, one has to use models of stellar structure, but these models have been very well verified and, unlike cosmology theory, are based on physical theories that are extremely well tested in the laboratory. We know experimentally how fusion reactions work in detail, we know about thermodynamics, radiation transfer, etc. Someone wrote we don’t have stars in the laboratory. But we have all the physics that goes into understanding stars. No dark matter needed. When we learn new physics from stars—like fusion itself 80 years ago and about neutrinos more recently—we can test that in the lab as well.

So the fact that helium abundance trends down towards zero with decreasing abundance of heavy elements is a flat contradiction to Big Bang nucleosynthesis theory, and indeed to any Big Bang theory that assumes a hot dense phase for the expansion of the universe.

How does this cohere with observations of interstellar gas in other galaxies? We are seeing galaxies after they have completed their initial formation process. When galaxies are forming they modestly cloak themselves in lots of dust that absorbs most visible and UV light and allows us to see them mainly in IR. So, at the moment, we can’t directly see very-low-He clouds. But we can see stars in our own galaxy that are relics of that early period. We know that because they have so little iron and other heavy elements produced by earlier stars. They are “pristine” and they don’t have the helium BBN predicts.

If, on the other hand, there was no BB and all the elements, including helium, that we observe are built up by thermonuclear processes in stars, then we expect that as we look at older and older stars in our own galaxy we see less and less helium and lithium. That is what I predicted in published results 30 years ago and that is what is observed.
 
References

People have asked for specific older references of mine, so here’s a bunch. Nearly all of them are available on either bigbangneverhappened.org or lppfusion.com.

"Do Local Analogs of Lyman Break Galaxies Exist?", R. Scarpa, R. Falomo, and E. Lerner, The Astrophysical Journal, Vol. 668, No. 1, pp. 74-80, Oct. 2007. http://arxiv.org/abs/0706.2948

"Evidence for a Non-Expanding Universe: Surface Brightness Data From HUDF", Proceedings of the First Crisis in Cosmology Conference, AIP Conference Proceedings 822, pp. 60-74, 2006. http://arxiv.org/abs/astro-ph/0509611

"Two World Systems Revisited: A Comparison of Plasma Cosmology and the Big Bang", IEEE Transactions on Plasma Science, Vol. 31, pp. 1268-1275, 2003.

"Intergalactic Radio Absorption and the COBE Data", Astrophysics and Space Science, Vol. 227, May 1995, pp. 61-81.

"On the Problem of Big Bang Nucleosynthesis", Astrophysics and Space Science, Vol. 227, May 1995, pp. 145-149.

"The Case Against the Big Bang", in Progress in New Cosmologies, Halton C. Arp et al., eds., Plenum Press (New York), 1993.

"Confirmation of Radio Absorption by the Intergalactic Medium", Astrophysics and Space Science, Vol. 207, 1993, pp. 17-26.

"Force-Free Magnetic Filaments and the Cosmic Background Radiation", IEEE Transactions on Plasma Science, Vol. 20, No. 6, Dec. 1992, pp. 935-938.

"Radio Absorption by the Intergalactic Medium", The Astrophysical Journal, Vol. 361, Sept. 20, 1990, pp. 63-68.

"Galactic Model of Element Formation", IEEE Transactions on Plasma Science, Vol. 17, No. 3, April 1989, pp. 259-263.

"Plasma Model of the Microwave Background", Laser and Particle Beams, Vol. 6, 1988, pp. 456-469.

"Magnetic Vortex Filaments, Universal Invariants and the Fundamental Constants", IEEE Transactions on Plasma Science, Special Issue on Cosmic Plasma, Vol. PS-14, No. 6, Dec. 1986, pp. 690-702.

"Magnetic Self-Compression in Laboratory Plasma, Quasars and Radio Galaxies", Laser and Particle Beams, Vol. 4, Pt. 2, 1986, pp. 193-222.
 
Ben m, you really need to brush up a bit on math. Of course it is possible to rewrite a linear Hubble relation in a single-parameter differential form. Here it is:

(dE/dt)/E = -(frequency) × (initial wavelength) / (Hubble length)

where E is the energy of the photon at any time, “frequency” is its frequency at any time, “initial wavelength” is the photon’s wavelength when it was emitted, and “Hubble length”, the sole parameter, is c/H, where c is the speed of light and H is the Hubble parameter.

To put this into words, in the time it takes to travel one wavelength, a photon loses a fraction of its energy equal to the ratio of its initial wavelength divided by the Hubble length.

If we change the hypothesis to say “current wavelength” rather than “initial wavelength” we then get a logarithmic relation between z and d rather than a linear one.

If photons with a higher initial frequency lose proportionally more energy than lower-initial-frequency photons, then the spectra of distant galaxies (supernovae, etc) should be compressed. Is this consistent with observation?
 
One more brief comment in reply to Jean Tate and others. The referees did not ask for specific statistical results on the fit to the SN Ia data because it was obvious by eye that the mathematical difference between the two lines was small compared with the scatter in the data. My colleagues have shown this around a lot and no one asks what the fit is. They are all just surprised at the “coincidence” that the LCDM prediction is so close to a straight line. Once you see how close the predictions are, it is obvious they will both fit the same data.

And yes, Jean, astronomical terminology is a mess. Magnitudes, Vega, AB, ST: all these systems are historical relics (despite the reference to the HST) and it is too bad astronomers are so fond of tradition. IMHO, as a physicist, it would be better to use physical units (SI and cgs) only and throw out magnitudes altogether, but that will not happen. The US still uses British imperial units, so go figure.
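
Coming back to the fit claim above: quoting a statistic would have been straightforward. Here is a minimal sketch (mine), assuming one has a SN Ia compilation as numpy arrays z, mu_obs and sigma (no data are included here) and reusing the mu_euclidean and mu_lcdm functions from the earlier sketch:

```python
# Sketch of the goodness-of-fit comparison being discussed. Assumes numpy arrays
# z, mu_obs, sigma from some published SN Ia Hubble-diagram compilation (not
# included here), plus the mu_euclidean(z) and mu_lcdm(z) functions defined in
# the earlier distance-modulus sketch.
import numpy as np

def chi2(mu_model, mu_obs, sigma):
    """Chi-square of a model distance-modulus curve against the data."""
    return float(np.sum(((mu_obs - mu_model) / sigma) ** 2))

# chi2_seu  = chi2(mu_euclidean(z), mu_obs, sigma)
# chi2_lcdm = chi2(np.array([mu_lcdm(zi) for zi in z]), mu_obs, sigma)
# print(chi2_seu / len(z), chi2_lcdm / len(z))   # compare per data point
```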
 
Eric L: Why are all of the published calculations of He abundance wrong?

On the light elements data, a couple of points. The data I cite on helium abundances is derived from observations of nearby stars in our immediate neighborhood.
Which is the point you are missing, Eric L.
The paper you cite states in the introduction that BBN produces a universal primordial He mass fraction of ~0.24.
Fig. 5 shows the metal abundance versus He mass fraction observed from the surfaces of local dwarf stars, fitted in two ways.

Ignorance about how primordial He abundance is calculated seems to be repeated, Eric L.
More clearly: Lerner provides no citations to literature saying that calculations of primordial He abundance in the Big Bang are wrong or that observations contradict the calculations. All we have is his unsupported assertion that the abundance of He in local stars (as in PCF10) can be used to predict the primordial abundance and is "far too low".

Lerner needs to learn how primordial He abundance is actually derived from observations. For example, they look at hot HII regions in dwarf galaxies to eliminate the need to model He abundance in stars.
Helium-4 and Dwarf Galaxies

The BB model prediction is 25%.
Eric L: Why are all of the published calculations of primordial He abundance wrong?
 
People have asked for specific older references of mine, so here’s a bunch.
Try to read your OP and title, Eric L: This is the Evidence against concordance cosmology thread.
Plasma cosmology is not concordance cosmology :eek:!
Plasma cosmology does not really exist. It is a collection of sometimes invalid, sometimes contradictory theories made by people with the unscientific assumption that the BB must be wrong.
Conference proceedings are not scientific literature, especially when they are an irrelevant early version of a non-physical toy model being tested.
 
If photons with a higher initial frequency lose proportionally more energy than lower-initial-frequency photons, then the spectra of distant galaxies (supernovae, etc) should be compressed. Is this consistent with observation?

No, the product of wavelength and frequency for an emitted photon is a constant--c.
 
This is NOT a physical mechanism, this is just rewriting the relationship mathematically.

Thanks Eric.

  • Are we *ever* allowed to try to write a physical model for this idea? And to propose methods for testing it other than sort-of-fitting straight lines through Hubble diagrams? Because right now I can think of lots of problems with your suggested model. (I mean, seriously, it's a godawful theory from the point of view of E&M, particle physics, and relativity, and it is trivially disproven by the existence of emitting and absorbing systems at different redshifts.) If I try to post such criticism, I suspect you'd say "no, no, you're not supposed to criticize the model itself yet, let's start by showing that it's a good fit and counting the parameters". To which I say, preemptively:
    • Your ability to invent the model is the only thing that allows us to count the parameters at all. (Nobody cares how many parameters there are in the best-vague-polynomial-drawn-through-something.)
    • The existence of multiple models---"one tired light model gives d=z, another gives d = log(1+z), another gives ..."---is, by definition, a free parameter in modeling; you chose to emphasize the d=z version over the d=log(1+z) version or the d=z+z^2/2 version because you think it agrees with the data, not for any other reason. But you forget (or pretend) that you made this choice when it comes time to talk about the number of parameters in your theory. (Recall that if the data had come in looking like d = log(1+z), you'd be talking about the one-parameter nature of your fit to THAT, and poking fun at any cosmologist who ran hypothesis-tests on a dark-matter-inspired d=z model.)
    • You're basically saying "we should describe the data simply and accurately first, and worry about the physical nature of the best-fit model later" here, which is exactly the behavior that invites your scorn. We have a great, predictive cosmological model, Eric, which describes the data simply and accurately. It's called "LCDM". It includes three ingredients (something that looks like dark matter, something that looks like dark energy, and an inflaton mechanism that kicks in at high energy) whose detailed physical nature we don't know ... and don't need to know for further model computations, although it'd be nice to find out. The resulting fits are superb, and theorists tell us that the new-physics-inferred is not implausible or otherwise ruled out.
 
Analysis that shows how Scarpa et al. (2007) was wrong

Do Local Analogs of Lyman Break Galaxies Exist? R. Scarpa, R. Falomo, and E. Lerner
The Astrophysical Journal. Volume 668, Issue 1, Page 74–80, Oct 2007 http://arxiv.org/abs/0706.2948.
Eric L: You should not cite a paper and ignore the paper that debunks it:
HST morphologies of local Lyman break galaxy analogs I: Evidence for starbursts triggered by merging
Roderik A. Overzier, Timothy M. Heckman, Guinevere Kauffmann, Mark Seibert, R. Michael Rich, Antara Basu-Zych, Jennifer Lotz, Alessandra Aloisi, Stephane Charlot, Charles Hoopes, D. Christopher Martin, David Schiminovich, Barry Madore
See the Appendix for a detailed analysis that demonstrates that the interpretation and conclusions presented in the recent paper by Scarpa et al. (2007) are in fact in error.
This happens to be the only credible paper that cites yours (the others being one preprint and a single-author paper).
There are 88 papers about local analogs of Lyman break galaxies published after yours.
 
"Intergalactic Radio Absorption and the COBE Data", Astrophysics and Space Science, Vol.227, May, 1995, p.61-81
Sorry, Eric L, but this is a really bad citation for many reasons.
  1. The COBE data did describe a "perfect" black body spectrum.
    The FIRAS measurements have error bars so small that they are typically hidden by the plotted Planck's Law curve.
    Mather et al. 1994, Astrophysical Journal, 420, 439, "Measurement of the Cosmic Microwave Background Spectrum by the COBE FIRAS Instrument"; Wright et al. 1994, Astrophysical Journal, 420, 450, "Interpretation of the COBE FIRAS CMBR Spectrum"; and Fixsen et al. 1996, Astrophysical Journal, 473, 576, "The Cosmic Microwave Background Spectrum from the Full COBE FIRAS Data Sets".
  2. Figure 2 is dubious. You are showing that your model cannot fit the COBE data (crosses) because you have not bothered to add error bars!
  3. Stars are not perfect black bodies. Galaxies are not perfect black body emitters. The IGM is not a perfect black body. An idea about invisible, never detected "electrons trapped in dense magnetically pinched filaments in the IGM" is a fantasy.
  4. An obscure single-author paper (cited 4 times!).
  5. Where are the follow up papers for the WMAP and Planck data?
    Citing an author who seems to have abandoned his own idea is bad.
  6. Where are the calculations of the power spectrum produced by these filaments?
  7. Where are the calculations of the temperature of the CMB with distance from us?
 
