Evidence against concordance cosmology

I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc., etc., etc.

Some of the authors of the above are also on this paper:

http://arxiv.org/abs/1512.05356

which is like the bizarro-world version of the Crisis In Cosmology Conference---this is what it sounds like when people who actually understand LCDM look for opportunities to break it. Interesting reading.
 
One thing I am struggling to understand, with Lerner's "tired light" cosmology:

We know that light from distant, redshifted supernovae is spread out over time. A supernova at z=0 with a monthlong outburst? If you see a spectrally-identical event at z=1, it has a two-month-long outburst.

(In fact, they're observed to spread out over time by exactly the same time stretch that appears in individual photon frequencies. In LCDM that is an obvious predicted effect.)

If I want to write down a tired-light model, I understand the tired-light energy loss on an individual photon level: "The source emitted a 6 eV photon but it only had 3 eV left when it arrived, so the detected energy is E0/(1+z)". (ETA: what I mean is, I understand that this is what I'm supposed to be modeling; I do not understand it as the plausible outcome of some new microphysics.) But if I'm looking at a whole supernova---well, let's compare.

In LCDM, identical supernovae have identical emitted-photon counts. If there's an event emitting N photons, each at energy E0 at a distance D and redshift z, I detect N/D^2 of those photons but they're each at E0/(1+z).

In tired-light, I'm not sure. It is an observational fact that high-z supernova photons are detected for a longer period of time.

a) Do you think that's a real effect ("those really were longer-lasting supernovae by a factor of (1+z)", or more generally "longer by some factor we can parameterize in z, for which 1+z is the best fit")? In that case, the total emitted photon count was N*(1+z) and the photon energies are E0/(1+z). The supernova's bolometric luminosity is then N/D^2 E0/(1+z), but its bolometric fluence is N/D^2 E0. (Note that a non-bursty standard candle just has luminosity N/D^2 E0/(1+z) in this case, magically matching the supernova.)

b) Do you think that "new tired-light physics" puts variable time-delays on the photons? So the photons are emitted over time T, but have their arrivals spread out to T*(1+z)? (Or, sorry, "have their arrivals spread out by some unknown z-dependent factor we'll have to fit to the data") In that case the bolometric luminosity is N/D^2 E0/(1+z)^2 while the bolometric fluence is N/D^2 E0/(1+z). Note that a time-independent candle isn't dimmed by mere timeshifting, so its bolometric luminosity is N/D^2 E0/(1+z) with just the photon-energy-loss correction.
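To keep the bookkeeping straight, here's a minimal sketch of the flux/fluence accounting in LCDM and in scenarios (a) and (b) above. This is my own illustration; the function names and input numbers are placeholders, not anyone's published model:

Code:
# Sketch: bolometric flux/fluence bookkeeping for a bursty standard candle,
# mirroring the relations stated in this post. Units are arbitrary; "D" is
# the static-Euclidean distance. Illustration only, not a published model.

def lcdm(N, E0, T, D, z):
    # Expanding universe: photon energies and arrival times both stretch.
    n_detected = N / D**2          # photons received per unit detector area
    E_detected = E0 / (1 + z)      # each photon redshifted
    T_observed = T * (1 + z)       # light curve time-dilated
    fluence = n_detected * E_detected
    flux = fluence / T_observed
    return flux, fluence

def tired_light_a(N, E0, T, D, z):
    # Scenario (a): the event "really" lasted (1+z) times longer,
    # so (1+z) times as many photons were emitted.
    n_detected = N * (1 + z) / D**2
    E_detected = E0 / (1 + z)
    T_observed = T * (1 + z)
    fluence = n_detected * E_detected     # = N E0 / D^2
    flux = fluence / T_observed           # = N E0 / (D^2 T (1+z))
    return flux, fluence

def tired_light_b(N, E0, T, D, z):
    # Scenario (b): N photons emitted over T, arrivals spread to T(1+z).
    n_detected = N / D**2
    E_detected = E0 / (1 + z)
    T_observed = T * (1 + z)
    fluence = n_detected * E_detected     # = N E0 / (D^2 (1+z))
    flux = fluence / T_observed           # = N E0 / (D^2 (1+z)^2)
    return flux, fluence

for model in (lcdm, tired_light_a, tired_light_b):
    flux, fluence = model(N=1e57, E0=1.0, T=30.0, D=1.0, z=1.0)
    print(model.__name__, "flux:", flux, "fluence:", fluence)

Note that (b) reproduces the LCDM flux for the bursty source but not for a steady one; that's the crux of the problem.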

c) If we're putting in a time-shift effect in empty space (as in answer b) surely this is experienced independently photon-by-photon. An individual photon has no way of knowing whether it's from the early edge of a supernova or the late edge. An actual "dT --> dT(1+z)" factor sounds highly implausible in this sense; surely any real photon-delays-in-empty-space would be a time-smearing (with *sigma* proportional to z, maybe) rather than a coherent stretch. Please show how a tired-light model's time smearing is parameterized, and show whether you've found any parameter choices that agree with the data.
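To make the stretch-versus-smear distinction in (c) concrete, here's a toy sketch. The Gaussian light-curve shape and the sigma = 40z days scaling are assumptions I've invented purely for illustration:

Code:
# Toy comparison: a coherent stretch dT -> dT(1+z) versus per-photon
# time-smearing with sigma proportional to z. The light-curve shape and
# the sigma(z) scaling are assumptions for illustration only.
import numpy as np

z = 1.0
t = np.linspace(-200.0, 200.0, 4001)          # days
rest = np.exp(-0.5 * (t / 15.0)**2)           # toy rest-frame light curve

# Coherent stretch: every arrival time scaled by (1+z), fluence conserved.
stretched = np.exp(-0.5 * (t / (15.0 * (1 + z)))**2) / (1 + z)

# Per-photon smearing: convolve with a Gaussian of width sigma = s0 * z.
sigma = 40.0 * z                               # assumed scaling
kernel = np.exp(-0.5 * (t / sigma)**2)
kernel /= kernel.sum()
smeared = np.convolve(rest, kernel, mode="same")

# A stretch preserves the light-curve *shape* (just rescaled in time);
# a smear does not: rise and fall are blurred symmetrically, and narrow
# features are washed out rather than widened by exactly (1+z).
for curve, name in ((stretched, "stretch"), (smeared, "smear")):
    half = t[curve > 0.5 * curve.max()]
    print(name, "FWHM ~", half.max() - half.min(), "days")

The observable difference: a coherent stretch keeps light-curve shapes and spectral aging rates self-similar, while a smear degrades the shape, so the two are distinguishable in data.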

(Maybe you don't want to do this yet, but we *have* to decide how the photon-arrival-time stretching works, because until you know that you don't know how to treat supernovae as standard candles.)

d) If we're allowed to invent new physics allowing perfectly-achromatic, perfectly-collinear "redshifting" at all frequencies, surely we have lost some confidence in any of our other priors about photon propagation through free space. Please document the set of different assumptions, hypotheses, and constraints that you've considered for the "tired light" behavior itself.

:popcorn:
 
So, tired light... anybody think that this hasn't been debunked years ago? I am really struggling to see the point of this. Nobody takes it seriously. Why should we start again now?
Is there some new evidence that I am not aware of? Or are we just going around in the same unevidenced PC circles again?
I linked to Brian Koberlein's rather dismissive articles on this nonsense, way back in the thread. Any chance that those objections have actually been overcome by the tired light brigade? Any chance they could post some evidence?
Frankly, it's just nonsense, which is why nobody takes it seriously. Very silly.
 
ben m said:
I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc., etc., etc.

Some of the authors of the above are also on this paper:

http://arxiv.org/abs/1512.05356

which is like the bizarro-world version of the Crisis In Cosmology Conference---this is what it sounds like when people who actually understand LCDM look for opportunities to break it. Interesting reading.
These are great, thanks for posting them ben m! :thumbsup:

The second one, Bull+ (2015) "Beyond ΛCDM: Problems, solutions, and the road ahead" is particularly interesting. Readers be warned though, it's 97 pages long, has 517 references (!), and assumes the reader is pretty familiar with quite a lot of physics and astronomy.
 
Here's what I found:

<snip>

JeanTate said:
CBR alignments (etc): (no refs)
<snip>

Nothing for the former, obviously.

<snip>
ben m said:
I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc., etc., etc.
The paper ben m cites is Schwarz+ (2015) "CMB Anomalies after Planck". Figure 4 in this seems to be ~the same as that in Eric L's presentation.

The caption for Figure 4 is:
The combined quadrupole-octopole map from the Planck 2013 release [33]. The multipole vectors (v) of the quadrupole (red) and for the octopole (black), as well as their corresponding area vectors (a) are shown. The effect of the correction for the kinetic quadrupole is shown as well, but just for the angular momentum vector n̂_2, which moves towards the corresponding octopole angular momentum vector after correction for the understood kinematic effects.

Have I correctly identified the reference, Eric L?
 
So, Eric, a summary of your talk looks like:

a) You took a tired-light theory that you haven't actually written down in any way. One of its predictions is vaguely consistent with your implementation of the Tolman surface brightness test. The error bars are large.

b) You don't know enough about the 20th-century hypothesis-testing machinery to quantify that "consistency" claim. You don't know enough about mainstream structure-formation theory to make a convincing claim of consistency/inconsistency with LCDM, so the best you have is an (invalid) parameter-counting/Occam's Razor argument. You don't know enough about your own tired light hypothesis to perform cross-checks with non-Tolman observables.

c) Your Occam's Razor argument is itself sort of hermetically sealed---you can only state it with a straight face if you pretend that the Hubble curve is the only evidence for LCDM's otherwise-unconstrained new physics, and if you pretend that tired light is not otherwise-unconstrained new physics. Your talk apparently gestures towards the existence of a parameter-counting argument against LCDM, but there is no such argument because LCDM is very highly overconstrained.

d) You also did some arXiv mining for plots that you thought discredited LCDM, and in several cases you deluded yourself severely about BBN and large-scale-structure issues. This perhaps costs you credibility, don't you think? When someone has cried wolf over and over, saying "X is disproven! X is disproven!", it suggests that they have some sort of kneejerk dislike of the hypothesis, rather than that they're a good objective judge of its merits. Such a person has no credibility to make claims like "X isn't as parsimonious a hypothesis as we'd prefer!"

e) The above is apparently the best anti-LCDM argument available after 30 years of trying to construct a plasma-cosmology alternative.

f) Also, some mainstream cosmologists are worried about 7Li abundances and about the possibility that we're seeing an unlikely realization of an LCDM CMB. You're right! We are already on the case on those, Eric. Maybe it's new physics. Maybe it's radical new physics. We'll learn more by constructing hypotheses and testing them against the best and largest datasets---just like we've always done.
 
Sure.

Note that I have not tried to match figures (etc) to what's in the first part of your presentation; I assume it's all in your "UV Tolman" paper (my shorthand).

<snip>

I've now tried; here's what I found:

Lerner+ (2014) "UV surface brightness of galaxies from the local universe to z ~ 5" is behind a paywall. However, there's an arXiv preprint; here's the abstract:

Lerner+ (2014) said:
The Tolman test for surface brightness dimming was originally proposed as a test for the expansion of the Universe. The test, which is independent of the details of the assumed cosmology, is based on comparisons of the surface brightness (SB) of identical objects at different cosmological distances. Claims have been made that the Tolman test provides compelling evidence against a static model for the Universe. In this paper we reconsider this subject by adopting a static Euclidean Universe with a linear Hubble relation at all z (which is not the standard Einstein-de Sitter model), resulting in a relation between flux and luminosity that is virtually indistinguishable from the one used for LCDM models. Based on the analysis of the UV surface brightness of luminous disk galaxies from HUDF and GALEX datasets, reaching from the local Universe to z ~ 5, we show that the surface brightness remains constant as expected in a SEU.
A re-analysis of previously-published data used for the Tolman test at lower redshift, when treated within the same framework, confirms the results of the present analysis by extending our claim to elliptical galaxies. We conclude that available observations of galactic SB are consistent with a static Euclidean model of the Universe.
We do not claim that the consistency of the adopted model with SB data is sufficient by itself to confirm what would be a radical transformation in our understanding of the cosmos. However, we believe this result is more than sufficient reason to examine further this combination of hypotheses.

In Eric L's presentation, "Static vs Expanding SNIa - Both Fit No Difference - amazing coincidence?" seems to be Figure 2 in the paper; here's the caption:

Lerner+ (2014) said:
Figure 2
Superposed to the models are data for supernovae type Ia from the gold sample as defined in Riess et al.^8 (pluses), and the Supernova Legacy Survey^9 (crosses). The assumed absolute magnitude of the supernovae is M = -19.25. The two lines (SEU model with solid line and ΛCDM concordance cosmology as dashed line) are nearly identical over the entire redshift range, differing at no point by more than 0.15 mag and in most of the region by less than 0.05 mag.

"Mean SB is Constant" seems to be Figure 4 (though it is B&W in the paper, with squares for the blue circles); here's the caption:

Figure 4. The difference in mean SB (Δμ = μHUDF - μGALEX) between the HUDF and GALEX members of each pair of matched samples plotted against the mean redshift of the HUDF samples (filled circles NUV dataset, filled squares: FUV dataset). Results are consistent with no change in SB with z. Error bars are 1-sigma statistical errors.

"If we include unresolved galaxies, Median SB is constant" seems to be Figure 5; here's the caption:

Figure 5. The difference in median SB (taking into account unresolved galaxies) between the HUDF and GALEX members of each pair of matched samples is plotted against the mean z of the HUDF sample (filled circles NUV dataset, filled squares: FUV dataset). As with the mean SB, results are consistent with no change in SB with z. Error bars are one-sigma statistical errors

"Observational limits do not bias results" seems to be Figure 6; here's the caption:

Figure 6. Log relative frequency of galaxies are plotted against SB for the selected HUDF sample with -17.5<M<-19 (squares) and with -16<M-17 (triangles). The dimmer galaxies on the right show the effect of the limits of SB visibility is significant only for galaxies dimmer than 28.5 mag/arcsec2 which does not affect the distribution of the galaxies in the sample. The sample galaxies on the left show a similar cutoff due to the smallest, highest SB galaxies being unresolved. In both cases the curves are Gaussian fits to the non-cutoff sections of the distributions.

"-16<M-17" seems to be a typo; I think it's meant to be "-16<M<-17".

There are ~five figures in the presentation with the same title, "New: Size - expanding vs non-expanding". None seem to be in the paper.

8 is "Riess A. G., Strolger L. G., Torny J. et al., ApJ 607, (2004) 665" (per ADS, Riess+ (2004) "Type Ia Supernova Discoveries at z > 1 from the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution")
9 is "Astier P., Guy J, Regnault N, et al., A&A 447, (2006) 31" (per ADS, Astier+ (2006) "The Supernova Legacy Survey: measurement of ΩM, ΩΛ and w from the first year data set")

I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
 
JeanTate said:
<SNIP>
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).

I assume you may be referring to this (possibly amongst others):
TIME DILATION IN TYPE Ia SUPERNOVA SPECTRA AT HIGH REDSHIFT
Blondin et al. (2008): http://arxiv.org/pdf/0804.3595v1.pdf
Abstract (bolding mine):
"We present multiepoch spectra of 13 high-redshift Type Ia supernovae (SNeIa) drawn from the literature, the ESSENCE and SNLS projects, and our own separate dedicated program on the ESO Very Large Telescope. We use the Supernova Identification (SNID) code of Blondin & Tonry to determine the spectral ages in the supernova rest frame. Comparison with the observed elapsed time yields an apparent aging rate consistent with the 1/(1 +z) factor (where z is the redshift) expected in a homogeneous, isotropic, expanding universe. These measurements thus confirm the expansion hypothesis, while unambiguously excluding models that predict no time dilation, such as Zwicky’s “tired light” hypothesis. We also test for power-law dependencies of the aging rate on redshift. The best-fit exponent for these models is consistent with the expected 1/(1 +z) factor."
 
JeanTate said:
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).

I was initially puzzled by that too. (I mean, the inclusion of this figure in the paper (with almost no accompanying discussion in the text) is bizarre to begin with, but why wouldn't a 2014 paper use, e.g., the Union2.1 dataset?) In terms of redshift coverage, this is not too big a deal, as a large fraction of our highest-redshift supernova catalogue is already in Lerner's catalogue. (On the other hand, our understanding of metallicity/spectra keeps improving, so the later Union catalogue would be expected to have better magnitude measurements.)
 
<snip>
Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)

Sbordone+ (2012): "Lithium abundances in extremely metal-poor turn-off stars" The figure in Eric L's presentation seems to be ~the same as Figure 4 in this paper.

<snip>
Here is the caption for Figure 4:
Sbordone+ (2012) said:
Fig. 4. Like in Fig. 1 but now including the two ultra metal poor stars HE 1327-2326 and SDSS J102915+172927 (open and filled large circles, respectively). Symbols for the other stars are the same as in Fig 1.

And Figure 1:

Fig. 1. A(Li) vs. [Fe/H] for various samples. Filled black circles, Bonifacio et al. (2012); open black circlessbordone10; green filled triangles Aoki et al. (2009); red open triangles Hosford et al. (2009); magenta stars, the two components of the binary star CS 22876-032 (González Hernández et al. (2008), blue open squares Asplund et al. (2006). Points with a downward arrow indicate upper limits. The grey horizontal line indicates the current estimated primordial Li abundance based on WMAP results.

"circlessbordone10" is a typo; it should be (I think) "circles Sbordone et al. (20100"

Bonifacio+ (2012) "Chemical abundances of distant extremely metal-poor unevolved stars"

Sbordone+ (2010) "The metal-poor end of the Spite plateau. I. Stellar parameters, metallicities, and lithium abundances"

Aoki+ (2009) "Lithium Abundances of Extremely Metal-Poor Turnoff Stars"

Hosford+ (2009) "Lithium abundances of halo dwarfs based on excitation temperature. I. Local thermodynamic equilibrium"

González Hernández+ (2008) "First stars XI. Chemical composition of the extremely metal-poor dwarfs in the binary CS 22876-032"

Asplund+ (2006) "Lithium Isotopic Abundances in Metal-poor Halo Stars"
 
The figure in Eric L's presentation seems to be Portinari, Casagrande, Flynn (2010)'s Figure 5.

This refers to "He also far too low in local stars", Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

The caption for Figure 5 reads:

Portinari said:
Figure 5. Metal versus helium mass fraction for nearby field dwarfs. Lines with two values of helium-to-metal enrichment ratio (break at Z = 0.015) are plotted to guide the eye. Left-hand panel: using the isochrone fitting procedure described in Casagrande et al. (2007). Right-hand panel: using numerical homology relations (see Section 4).

Casagrande+ (2007) "The helium abundance and ΔY/ΔZ in lower main-sequence stars"

The paper itself is behind a paywall; however, there is an arXiv preprint. The figure in Eric L's presentation is ~the same as Karachentsev (2012)'s Figure 4.


This refers to "LCDM predicts 3x too much DM" Karachentsev (2012) "Missing dark matter in the local universe"

The caption for Figure 4 reads:

Karachentsev (2012) said:
Figure 4.
The average density of matter in the spheres of different radii (the stepped line). The squares and triangles mark the contribution of pairs, triplets, and groups of galaxies.

(to be continued)

The figure in Eric L's presentation seems to be Figure 2 in Clowes+ (2013), not anything in Clowes+ (2012).

This refers to ">200 Mpc LSS takes far too long to form for BB" Clowes+ (2013) "A structure in the early Universe at z ˜1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"

The caption for Figure 2 reads:

Clowes+ (2013) said:
Figure 2. Snapshot from a visualization of both the new, Huge-LQG, and the CCLQG. The scales shown on the cuboid are proper sizes (Mpc) at the present epoch. The tick marks represent intervals of 200 Mpc. The Huge-LQG appears as the upper LQG. For comparison, the members of both are shown as spheres of radius 33.0 Mpc (half of the mean linkage for the Huge-LQG; the value for the CCLQG is 38.8 Mpc). For the Huge-LQG, note the dense, clumpy part followed by a change in orientation and a more filamentary part. The Huge-LQG and the CCLQG appear to be distinct entities.

I will add relevant details to "Evidence indicates scattering/abs of RF radiation in local universe: (Atrophys & SS, 1993)" when Eric L provides the reference. Ditto "Disney? (voiceover, not slide)".
 
ben m said:
So, Eric, a summary of your talk looks like:

a) You took a tired-light theory that you haven't actually written down in any way. One of its predictions is vaguely consistent with your implementation of the Tolman surface brightness test. The error bars are large.

b) You don't know enough about the 20th-century hypothesis-testing machinery to quantify that "consistency" claim. You don't know enough about mainstream structure-formation theory to make a convincing claim of consistency/inconsistency with LCDM, so the best you have is an (invalid) parameter-counting/Occam's Razor argument. You don't know enough about your own tired light hypothesis to perform cross-checks with non-Tolman observables.

c) Your Occam's Razor argument is itself sort of hermetically sealed---you can only state it with a straight face if you pretend that the Hubble curve is the only evidence for LCDM's otherwise-unconstrained new physics, and if you pretend that tired light is not otherwise-unconstrained new physics. Your talk apparently gestures towards the existence of a parameter-counting argument against LCDM, but there is no such argument because LCDM is very highly overconstrained.

d) You also did some arXiv mining for plots that you thought discredited LCDM, and in several cases you deluded yourself severely about BBN and large-scale-structure issues. This perhaps costs you credibility, don't you think? When someone has cried wolf over and over, saying "X is disproven! X is disproven!", it suggests that they have some sort of kneejerk dislike of the hypothesis, rather than that they're a good objective judge of its merits. Such a person has no credibility to make claims like "X isn't as parsimonious a hypothesis as we'd prefer!"

e) The above is apparently the best anti-LCDM argument available after 30 years of trying to construct a plasma-cosmology alternative.

f) Also, some mainstream cosmologists are worried about 7Li abundances and about the possibility that we're seeing an unlikely realization of an LCDM CMB. You're right! We are already on the case on those, Eric. Maybe it's new physics. Maybe it's radical new physics. We'll learn more by constructing hypotheses and testing them against the best and largest datasets---just like we've always done.

I'm not yet at the point ben m was, a few days ago.

However, the "He also far too low in local stars", "LCDM predicts 3x too much DM", and ">200 Mpc LSS takes far too long to form for BB" seem very poorly researched, with highly relevant papers not mentioned.

"CBR alignments (etc)" is not so much poorly researched as cherry-picking; it is far from settled that the anomalies are more than just statistical flukes.

"Li declines with Fe, < 0.03 BBN prediction" refers to a well-known apparent mismatch between LCDM models (the BBN part, in this case) and observation. However, Eric L's presentation fails to mention the rich history on this.

Next: I'll take a deeper look at the first ~half of the presentation, which covers Eric L's recent "UV Tolman" paper (my shorthand). And then I'll likely look at the old Lerner paper cited in the second part of the presentation.
 
JeanTate said:
<SNIP>
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
I assume you may be referring to this (possibly amongst others):
TIME DILATION IN TYPE Ia SUPERNOVA SPECTRA AT HIGH REDSHIFT
Blondin et al. (2008): http://arxiv.org/pdf/0804.3595v1.pdf
Abstract (bolding mine):
"We present multiepoch spectra of 13 high-redshift Type Ia supernovae (SNeIa) drawn from the literature, the ESSENCE and SNLS projects, and our own separate dedicated program on the ESO Very Large Telescope. We use the Supernova Identification (SNID) code of Blondin & Tonry to determine the spectral ages in the supernova rest frame. Comparison with the observed elapsed time yields an apparent aging rate consistent with the 1/(1 +z) factor (where z is the redshift) expected in a homogeneous, isotropic, expanding universe. These measurements thus confirm the expansion hypothesis, while unambiguously excluding models that predict no time dilation, such as Zwicky’s “tired light” hypothesis. We also test for power-law dependencies of the aging rate on redshift. The best-fit exponent for these models is consistent with the expected 1/(1 +z) factor."
JeanTate said:
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
I was initially puzzled by that too. (I mean, the inclusion of this figure in the paper (with almost no accompanying discussion in the text) is bizarre to begin with, but why wouldn't a 2014 paper use, e.g., the Union2.1 dataset?) In terms of redshift coverage, this is not too big a deal, as a large fraction of our highest-redshift supernova catalogue is already in Lerner's catalogue. (On the other hand, our understanding of metallicity/spectra keeps improving, so the later Union catalogue would be expected to have better magnitude measurements.)

I did not have any particular paper or SNIa dataset in mind.

Take "Astier P., Guy J, Regnault N, et al., A&A 447, (2006) 31" (per ADS, Astier+ (2006) "The Supernova Legacy Survey: measurement of ΩM, ΩΛ and w from the first year data set") for example. In ADS it has been cited 1856 times! Sure, many of those cites are not papers, and many are after Lerner et al. were writing their 2014 paper. And many do not report new SNIa observations or analyses (or any kind of supernovae ones).

However, there are many papers reporting new (SNIa) observations and important new analyses. Why did Lerner et al. not include at least the bigger and most recent of the SNIa datasets in their Figure 2?

Some examples (an entirely random selection!): Gal-Yam+ (2013) "Supernova Discoveries 2010--2011: Statistics and Trends"; Feindt+ (2013) "Measuring cosmic bulk flows with Type Ia supernovae from the Nearby Supernova Factory"; "The Fundamental Metallicity Relation Reduces Type Ia SN Hubble Residuals More than Host Mass Alone".
 
JeanTate said:
I did not have any particular paper or SNIa dataset in mind.

No, I didn't in particular, but that one was quoted in Ned Wright's debunking of tired light:
http://www.astro.ucla.edu/~wright/tiredlit.htm

I am far from an expert in this particular field (planetary science, with a bit of archaeology and palaeoanthropology thrown in), but the little I have followed of this over the years, says that this whole 'tired light' concept is ruled out at very high confidence levels.
So from an outsider looking in, it would seem that it would take an absolutely momentous discovery, showing how they are right and everybody else is wrong, to overturn the existing data.
From what I can tell, the SN1a time dilation data kills it stone dead.
I suppose what I'm saying is: is the whole of current cosmology about to be turned on its head, or is this just more of the same from the PC brigade? I.e., not worth losing any sleep over?
 
So I've started reading Lerner+ (2014), and have some questions. Yes, some are pretty straightforward, and yes, I can probably find the answers myself, with enough time and effort ... however, if any reader has the answers at their fingertips ...

From the Introduction:

Lerner+ (2014) said:
In fact, in any expanding cosmology, the SB is expected to decrease very rapidly, being proportional to (1+z)^−4, where z is the redshift and where SB is measured in bolometric units (VEGA-magnitudes/arcsec^2 or erg sec^−1 cm^−2 arcsec^−2). One factor of (1+z) is due to time-dilation (decrease in photons per unit time), one factor is from the decrease in energy carried by photons, and the other two factors are due to the object being closer to us by a factor of (1+z) at the time the light was emitted and thus having a larger apparent angular size. (If AB magnitudes or flux densities are used, the dimming is by a factor of (1+z)^3, while for space telescope magnitudes or flux per wavelength units, the dimming is by a factor of (1+z)^5.) By contrast, in a static (non-expanding) Universe, where the redshift is due to some physical process other than expansion (e.g., light-aging), the SB is expected to dim only by a factor (1+z), or be strictly constant when AB magnitudes are used.
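For concreteness, here's a minimal sketch of what those exponents mean in magnitudes (my own illustration of the relations quoted above; the redshift values are arbitrary):

Code:
# Sketch of the Tolman dimming factors quoted above, in magnitudes.
# "n" is the exponent of (1+z): 4 for bolometric SB, 3 for AB
# (per-frequency) SB, 5 for ST (per-wavelength) SB, and 1 for the
# static/tired-light case in bolometric units.
import math

def sb_dimming_mag(z, n):
    # Surface brightness dims by (1+z)^n; in magnitudes that is
    # Delta_mu = 2.5 * n * log10(1+z).
    return 2.5 * n * math.log10(1.0 + z)

for z in (0.5, 1.0, 3.0, 5.0):
    print("z=%.1f  bolometric: %.2f  AB: %.2f  ST: %.2f  static(bolo): %.2f mag"
          % (z, sb_dimming_mag(z, 4), sb_dimming_mag(z, 3),
             sb_dimming_mag(z, 5), sb_dimming_mag(z, 1)))

At z = 5 that's ~7.8 mag of bolometric dimming in an expanding universe versus ~1.9 mag in the static case, which is why the test has discriminating power in principle.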

Is there an easily accessible, 'for dummies' explanation of all these relationships?

Also, are the first lot true "in any expanding cosmology" (my bold)? Or are at least some only true for GR-based (or equivalent) expanding cosmologies?

Too, in all SnEUs (static, non-expanding universes ETA: a.k.a. SEU, static Euclidean universe), "where the redshift is due to some physical process other than expansion" are the last two relationships true, no matter what that "physical process" is?
 
This is a "best practice" question.

Lerner+ (2014) said:
In the last few decades the use of modern ground-based and space-based facilities have provided a huge amount of high quality data for the high-z Universe. The picture emerging from these data indicates that galaxies evolve over cosmic time.
Lerner+ (2014) have no references for this.

On the one hand, I guess many readers would be at least somewhat familiar with the literature on this; on the other hand, this paper was published in "International Journal of Modern Physics D", and I expect only a few readers would be familiar with the relevant literature.

From my reading of astronomy papers, I would expect to see a list of relevant references, if only "e.g. X, Y, and Z, and references therein".

What do others think?
 
<snip>

Too, in all SnEUs (static, non-expanding universes ETA: a.k.a. SEU, static Euclidean universe), "where the redshift is due to some physical process other than expansion" are the last two relationships true, no matter what that "physical process" is?

Answering my own question, No.

For example, in a process/mechanism that is hot among EU acolytes ("plasma redshift"), the scattering will produce blurring, which will certainly affect the apparent SB! Though by how much, and with what functional form (i.e. dependence on z), it is, I think, impossible to say (needless to say, no EU groupie has ever published any estimates of this effect).

And any redshift due to some physical process/mechanism which involves scattering will also produce some effect on SB, right?
 
Once again I urge Lerner to clarify (ideally with a published reference) what his tired-light model actually says.

I am trying to construct the Hubble curve for such a model and---well, let's see. Putting in an exponential scale length for photon energy loss, and assuming that the "magnitude" measured by astronomers (which is a flux, not a fluence) is the standard-candle property---as far as I can tell the redshift vs. distance-modulus relation needs to be of the form:

D_L = (c/H) Sqrt(1+z) ln(1+z), i.e. flux falling as 1 / ((1+z) ln(1+z)^2)

OK, so the question cosmologists ask about the Hubble curve is "what is the shape of the curve?" A polynomial approximation that nicely captures the cosmology is:

D = c/H (z + (1-q)/2 z^2 + ...)

where, in standard cosmology, "q" is a simple sum over the energy content of the Universe, accounting for different equations-of-state.

Tired light theory looks like it corresponds to basically q = +1. In the standard parameterization q = Omega_m/2 - Omega_lambda, so that's a strongly decelerating universe: something like Omega_m = 2 with Omega_lambda = 0, i.e. a universe with no dark energy and several times more matter than anyone measures. So, yes, standard cosmology *has* done that fit, and quantified it using standard statistical hypothesis testing. You can see where the Omega_m = 2, Omega_lambda = 0 region sits relative to the data on this two-axis plot:

http://supernova.lbl.gov/Union/figures/Union2.1_Om-Ol_slide.pdf

and it's ruled out; it's nowhere near the 68% confidence region, nor the 95%, nor the 99.7% confidence regions.
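As a sanity check on that q = +1 reading, here's the series expansion, assuming the exponential-loss distance relation written above (my arithmetic, not a published derivation):

\[
d_L^{\rm TL} = \frac{c}{H}\,\sqrt{1+z}\,\ln(1+z)
= \frac{c}{H}\left(1+\frac{z}{2}+\dots\right)\left(z-\frac{z^2}{2}+\dots\right)
= \frac{c}{H}\left(z + 0\cdot z^2 + \mathcal{O}(z^3)\right)
\]

Matching against d_L = (c/H)(z + (1-q)/2 z^2 + ...) gives (1-q)/2 = 0, i.e. q = +1: the quadratic term vanishes, which is what a strongly decelerating universe looks like at low z.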

I repeat, if you think that I've done the tired-light prediction incorrectly (this is, after all, just a forum post) you are welcome to point me to a reliable non-YouTube source that does the math.
 
magnitude systems used by astronomers

In the Introduction section, Lerner+ (2014) mentions both Vega ("VEGA-magnitudes/arcsec^2 or erg sec^−1 cm^−2 arcsec^−2") and AB ("If AB magnitudes or flux densities are used") magnitude systems.

Some readers may be unfamiliar with these; this webpage has an explanation (there are plenty of others, of course). Note that the Vega system is called the "Johnson System" on that page.

A key aspect of both systems is the colors (my bold):
  • Vega: "This system is defined such that the star Alpha Lyr (Vega) has V=0.03 and all colors equal to zero."
  • AB: "This magnitude system is defined such that, when monochromatic flux f_nu is measured in erg sec^-1 cm^-2 Hz^-1,
    m(AB) = -2.5 log(f_nu) - 48.60
    where the value of the constant is selected to define m(AB)=V for a flat-spectrum source. In this system, an object with constant flux per unit frequency interval has zero color."
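To make the AB definition concrete, a minimal sketch (the 3631 Jy zero-point flux is the standard AB reference value):

Code:
# AB magnitude from a monochromatic flux density f_nu in erg s^-1 cm^-2 Hz^-1,
# using the zero point quoted above: m_AB = -2.5 log10(f_nu) - 48.60.
import math

def m_ab(f_nu):
    return -2.5 * math.log10(f_nu) - 48.60

# Example: f_nu = 3631 Jy = 3.631e-20 erg s^-1 cm^-2 Hz^-1 gives m_AB ~ 0.
print(m_ab(3.631e-20))   # ~ 0.0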

How clearly does Lerner+ (2014) make the distinctions?

"where SB is measured in the bolometric units (VEGA-magnitudes/arcsec−2 or erg sec−1 cm−2 arcsec−2": this is at least a little bit misleading ... "bolometric units" are indeed power per unit area (erg sec−1 cm−2 in this case; the arcsec−2 converts this to surface brightness), but the Vega system is no different from the AB one in this regard ... "bolometric" refers an integration over all wavelengths/frequencies, to get the total electromagnetic fluxnote1
"If AB magnitudes or flux densities are used, the dimming is by a factor of (1+z)3, while for space telescope magnitudes or flux per wavelength units, the dimming is by a factor of (z+1)5"

(to be continued)

Aside from the possible confusion re "flux" and "flux density" [note 1], there's the lack of reference for "space telescope magnitudes or flux per wavelength units". Presumably "space telescope" refers to the Hubble Space Telescope. But what camera, or cameras, is it referring to?

The default for WFPC2 is "Johnson Visual magnitudes", which is indeed flux per wavelength unit (source). However, for ACS three different systems are used, ST, AB, and VEGA (source).

Does any of this possible confusion matter, in terms of what's in the guts of the paper? I don't know, but I'll certainly be looking out for any such.

Note 1: You have to be extra careful with the terms "flux" and "flux density"; they are not defined consistently across all branches of physics, and even within astronomy you may come across different definitions. WP has a good discussion.
 
