Evidence against concordance cosmology

How can the universe not be expanding if GR is right? Some of you have asked. Well, first of all, as scientists, we take observation over theory, and the Tolman test is about observations. But really it is NOT true that GR predicts an expanding universe. The equations of GR by themselves make absolutely no prediction about expansion, collapse, or stasis. You can add assumptions about initial conditions and then get expansion, contraction, or neither. What Einstein did find out, and this bothered him, is that a UNIFORM distribution of matter, if it is big enough, should collapse (gravity attracts, right? Collapse, not expansion). So he put in the cosmological constant, but was not happy that this created an unstable universe prone to either collapse or expansion.

There are two ways to avoid this problem. First, as some of my colleagues believe (we can get into this later), the distribution of matter in the universe may be fractal at all scales. In that case, it will not be unstable to expansion or contraction at any scale. To put it another way, at no scale will the universe be close to a critical density.

Second, we have hypothesized that the Hubble relation comes from a loss of energy by photons as they travel—presumably due to an as-yet-undiscovered feature of EM. Since photons carry the EM fields, that means the fields weaken with distance at distances of Gpc. This may well be true of all fields, including gravity—a long-distance modification of GR. Yes, this hypothesizes new physics. But it is physics that we can test experimentally. A redshift due to distance, not expansion, could be detected with spacecraft similar to the planned LISA project.
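For concreteness, here is the toy distance-redshift relation such an energy-loss model gives if the fractional loss per unit distance is constant (that particular loss law is an illustrative assumption on my part, not a derived result):

\[ \frac{dE}{dD} = -\frac{H_0}{c}\,E \quad\Rightarrow\quad E(D) = E_0\,e^{-H_0 D/c} \quad\Rightarrow\quad 1+z = \frac{E_0}{E(D)} = e^{H_0 D/c}, \]

which reduces to the familiar linear Hubble law, z ≈ H0 D/c, for z ≪ 1.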

And of course space can be flat, Euclidean on large scales and curved on small scales. Just think about a nice flat bed sheet with a few marbles under it.
 
Occam's razor again

When I mentioned the four additional parameters needed to explain the surface brightness data, I was talking about the parameters needed to describe the hypothesized evolution in size of both disk and elliptical galaxies. All four of these parameters would be ad hoc, without physical justification, because the published, physically motivated theories of size evolution do not fit the actual data. In contrast, the non-expanding hypothesis requires no free parameters to fit the data. It predicts no change in surface brightness, and that is what we see. So this is why I ask how people can think a four-parameter theory is a better fit to THIS data than a zero-parameter one. I'll talk about other data sets later on.
 
And of course space can be flat, Euclidean on large scales and curved on small scales. Just think about a nice flat bed sheet with a few marbles under it.

I prefer an analogy where I haven't lost my marbles.
 
Thanks to Jean Tate and Ben m for posting the actual references. Not sure why they are not in the slides—it might be that the wrong version of the slides went into the video. Hope you found some other interesting stuff in digging these up! Anyway, I will post the missing ones here as soon as I get a moment—maybe tomorrow.

<snip>
Sure.

Note that I have not tried to match figures (etc) to what's in the first part of your presentation; I assume it's all in your "UV Tolman" paper (my shorthand).

Also, I did not try to find the references to your own papers, in the second ~half of your presentation.
 
Reality Check said:
One thing we know about the universe that we live in is that it is not Euclidean.
Just curious RC. How do you know that?
I want to close this little sub-thread out by saying that I have learned, once again, that it is really important to be clear about the details ... first.

I think - hope - that all those who've joined in on this sub-thread are now in ~agreement, and that there are quite a few pieces to this "is not Euclidean" thing! :D

One thing I learned is that it is - apparently - possible to have a Euclidean space that is 'expanding'! :) I wonder what the coordinates for such a thing might be?
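(One standard example, for the record: in comoving coordinates the spatially flat FLRW metric is

\[ ds^2 = -c^2\,dt^2 + a(t)^2\,(dx^2 + dy^2 + dz^2), \]

whose spatial sections at each instant are exactly Euclidean, with all of the 'expansion' carried by the scale factor a(t).)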
 
How can the universe not be expanding if GR is right? Some of you have asked.
Myself, I was happier with this thread being limited to just what it says in the title. However, as the presentation you started with - as a link in the OP - includes alternative cosmological models (or similar), I fear we will spend a lot of time going over what's in the old Plasma Cosmology - Woo or not thread.

And discussing GR in the context of such alternatives is a biggie.

Well first of all, as scientists, we take observation over theory and the Tolman test is about observations. <snip>
So is "the CMB". Observations of which you barely touched on in your presentation. Does discussing these observations make this thread "Evidence against concordance (plasma) cosmology?"

Also, as I think has been noted, "as scientists, we take observation over theory" is at best a not very useful oversimplification. When discussing (most of) astronomy, and all of cosmology.

Why?

Because almost all observations are so steeped in theory that if you remove all theory you're left with an unintelligible mess. And this is quite evident in Sbordone+ (2012), Hansen+ (2015), Portinari, Casagrande, Flynn (2010), Karachentsev (2012), and Clowes+ (2013) (it's also likely evident in all the other primary references in your presentation).

So a much more sensible starting point is, I submit, to first work hard to get agreement on what theories will be taken for granted.

And I don't think we can avoid this ... just as it was not possible to avoid it in the "Plasma Cosmology" thread.
 
<snip>
Next is "Lithium, Helium predictions are "clearly" wrong."
No citations to literature saying that Li is wrong. His comments seem ignorant of the fact that astronomers know the production of Li in stars needs to be accounted for to get the BBN amount.
<snip>
The science is Big Bang nucleosynthesis

So Lerner is wrong about He. Issues with Li were known and seem to have been fixed.
<snip>

The 7Li abundance deficit is very, very well known and is in my mind the "most serious" problem with LCDM cosmology.
Just how "very, very well known" can be seen by looking at the number of papers ADS records are being referenced by Spite&Spite (1982) ("Abundance of lithium in unevolved halo stars and old disk stars - Interpretation and consequences") ... it's 686! Two of which, as you'd expect are Sbordone+ (2012) and Hansen+ (2015).

And plenty of those 686 were published after 2012 (and many after Hansen+ (2015)'s paper too).

As to whether "Issues with Li [...] seem to have been fixed": I am not familiar with the literature, but at least one recent paper does seem to show that getting at the primordial abundance of Li-7 is, um, challenging, Korn (2012). From the abstract "It will be shown that it is primarily our incomplete knowledge of the temperature scale of Population II stars which currently limits the diagnostic power of globular clusters as regards the stellar-surface evolution of lithium."

From RC's reference:

Korn+ (2006) "A probable stellar solution to the cosmological lithium discrepancy"; the abstract (my bold):
The measurement of the cosmic microwave background has strongly constrained the cosmological parameters of the Universe. When the measured density of baryons (ordinary matter) is combined with standard Big Bang nucleosynthesis calculations, the amounts of hydrogen, helium and lithium produced shortly after the Big Bang can be predicted with unprecedented precision. The predicted primordial lithium abundance is a factor of two to three higher than the value measured in the atmospheres of old stars. With estimated errors of 10 to 25%, this cosmological lithium discrepancy seriously challenges our understanding of stellar physics, Big Bang nucleosynthesis or both. Certain modifications to nucleosynthesis have been proposed, but found experimentally not to be viable. Diffusion theory, however, predicts atmospheric abundances of stars to vary with time, which offers a possible explanation of the discrepancy. Here we report spectroscopic observations of stars in the metal-poor globular cluster NGC6397 that reveal trends of atmospheric abundance with evolutionary stage for various elements. These element-specific trends are reproduced by stellar-evolution models with diffusion and turbulent mixing. We thus conclude that diffusion is predominantly responsible for the low apparent stellar lithium abundance in the atmospheres of old stars by transporting the lithium deep into the star.

Charbonnel&Primas (2005) "The lithium content of the Galactic Halo stars"; the abstract (my bold):
Thanks to the accurate determination of the baryon density of the universe by the recent cosmic microwave background experiments, updated predictions of the standard model of Big Bang nucleosynthesis now yield the initial abundance of the primordial light elements with unprecedented precision. In the case of ^7Li, the CMB+SBBN value is significantly higher than the generally reported abundances for Pop II stars along the so-called Spite plateau. In view of the crucial importance of this disagreement, which has cosmological, galactic and stellar implications, we decided to tackle the most critical issues of the problem by revisiting a large sample of literature Li data in halo stars that we assembled following some strict selection criteria on the quality of the original analyses. In the first part of the paper we focus on the systematic uncertainties affecting the determination of the Li abundances, one of our main goals being to look for the "highest observational accuracy achievable" for one of the largest sets of Li abundances ever assembled. We explore in great detail the temperature scale issue with a special emphasis on reddening. We derive four sets of effective temperatures by applying the same colour {T}_eff calibration but making four different assumptions about reddening and determine the LTE lithium values for each of them. We compute the NLTE corrections and apply them to the LTE lithium abundances. We then focus on our "best" (i.e. most consistent) set of temperatures in order to discuss the inferred mean Li value and dispersion in several {T}_eff and metallicity intervals. The resulting mean Li values along the plateau for [Fe/H] ≤ -1.5 are A(Li)_NLTE = 2.214±0.093 and 2.224±0.075 when the lowest effective temperature considered is taken equal to 5700 K and 6000 K respectively. This is a factor of 2.48 to 2.81 (depending on the adopted SBBN model and on the effective temperature range chosen to delimit the plateau) lower than the CMB+SBBN determination. We find no evidence of intrinsic dispersion. Assuming the correctness of the CMB+SBBN prediction, we are then left with the conclusion that the Li abundance along the plateau is not the pristine one, but that halo stars have undergone surface depletion during their evolution. In the second part of the paper we further dissect our sample in search of new constraints on Li depletion in halo stars. By means of the Hipparcos parallaxes, we derive the evolutionary status of each of our sample stars, and re-discuss our derived Li abundances. A very surprising result emerges for the first time from this examination. Namely, the mean Li value as well as the dispersion appear to be lower (although fully compatible within the errors) for the dwarfs than for the turnoff and subgiant stars. For our most homogeneous dwarfs-only sample with [Fe/H] ≤ -1.5, the mean Li abundances are A(Li)_NLTE = 2.177±0.071 and 2.215±0.074 when the lowest effective temperature considered is taken equal to 5700 K and 6000 K respectively. This is a factor of 2.52 to 3.06 (depending on the selected range in {T}_eff for the plateau and on the SBBN predictions we compare to) lower than the CMB+SBBN primordial value. Instead, for the post-main sequence stars the corresponding values are 2.260±0.1 and 2.235±0.077, which correspond to a depletion factor of 2.28 to 2.52. 
These results, together with the finding that all the stars with Li abnormalities (strong deficiency or high content) lie on or originate from the hot side of the plateau, lead us to suggest that the most massive of the halo stars have had a slightly different Li history than their less massive contemporaries. In turn, this puts strong new constraints on the possible depletion mechanisms and reinforces Li as a stellar tomographer.

One reason why observed lithium abundances are so difficult to convert to estimates of primordial abundance: lithium is unusual (unique?) among light elements in that its two stable isotopes - Li6 and Li7 - can be both created (e.g. via cosmic ray spallation) and destroyed (e.g. by being 'burned' in MS stars) in known astrophysical environments, but only Li7 is expected to be created in the BBN epoch.
 
I've got some supernova data queued up now and am trying to figure out what Lerner's theory actually says. More on this later.

So this is why I ask how people can think a four-parameter theory is a better fit to THIS data than a zero-parameter one. I'll talk about other data sets later on.

First: again, Eric, you show a bizarre faith in a weird half-baked version of "parameter counting", which is (as I think I have said) a small aspect of a large world of statistical hypothesis testing, and does not work the way you seem to think. There is no generic rule of "we prefer few-parameter theories". There is a rule "we prefer theories that most accurately match the constraints", and parameter-counting is one aspect of making the "accurately match" determination.

Second: your "four parameter" thing appears to come from your own totally-physics-free attempt to invent a polynomial, or a set of broken powerlaws, or something, that goes through the data. That's not a parameterization anyone cares about, and it has no relationship to the complexity required by an actual theory of galaxy size evolution. (It's like: imagine handing a student a Poisson-distributed histogram, but they don't know about Poisson processes so they start trying to describe it with polynomial and Gaussian features. "OK, to account for the steep early rise we need a start parameter, a curvature parameter, and a peak height, and for this long tail we need a width and a skewness. Five parameters at least." A totally meaningless number.)
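To make that analogy concrete, here is a minimal sketch (mine, with illustrative numbers; nothing from Eric's paper) of fitting simulated Poisson counts with the correct one-parameter model versus a physics-free five-parameter polynomial:

Code:
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

# The true process: a one-parameter Poisson distribution.
lam_true = 4.0
samples = rng.poisson(lam_true, size=2000)
k = np.arange(16)
counts = np.bincount(samples, minlength=16)[:16]

# Fit 1: the correct model. The Poisson MLE is just the sample mean.
lam_hat = samples.mean()
model_poisson = counts.sum() * poisson.pmf(k, lam_hat)

# Fit 2: the "student" approach -- a 5-parameter quartic polynomial.
model_poly = np.polynomial.Polynomial.fit(k, counts, deg=4)(k)

def chi2(obs, exp):
    exp = np.clip(exp, 1e-9, None)
    return np.sum((obs - exp) ** 2 / exp)

# Penalize parameter count AIC-style: chi^2 + 2 * (number of parameters).
print("Poisson (1 param):", chi2(counts, model_poisson) + 2 * 1)
print("Quartic (5 params):", chi2(counts, model_poly) + 2 * 5)
# The quartic can track the histogram closely, but its five parameters
# carry no information about the underlying process; raw parameter
# counting is meaningless without a model that could have generated
# the data.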
 
Eric L: Please cite the performance of the concordance Tolman test in your paper

I started with the Tolman test ...
And once again you miss a basic fact about your paper, Eric L :p! Your paper contains no evidence against concordance cosmology.

I will make this easy to understand with a simple question: Eric L: Please cite the performance of the Tolman surface brightness test against the concordance model in Lerner et al.

Your paper contains evidence for a toy model of the universe that anyone can see is non-physical - the universe is not Euclidean :jaw-dropp!

The Tolman test is a powerful test, but it is not just about geometry unless your level of knowledge is limited to Wikipedia articles :D! For example, surface brightness varies as galaxies age.
The people who have done the Tolman surface brightness test get that it:
  • Supports concordance cosmology (Lubin and Sandage papers)
  • Invalidates any tired light model (Lubin and Sandage papers)
  • Rules out all except concordance cosmology and tired light models (http://goo.gl/xe4vwL).
  • Supports an unphysical model of a static Euclidean universe (Lerner et al).
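For reference, the standard scalings behind that list (my summary, not something quoted from those papers): in any expanding FRW model the bolometric surface brightness dims as

\[ \mathrm{SB}_{\mathrm{obs}} = \frac{\mathrm{SB}_{\mathrm{emit}}}{(1+z)^{4}}, \]

with one factor of (1+z) from photon energy loss, one from time dilation, and two from the change in angular size, while in a static Euclidean universe with tired light only the energy-loss factor survives:

\[ \mathrm{SB}_{\mathrm{obs}} = \frac{\mathrm{SB}_{\mathrm{emit}}}{1+z}. \]

That difference is what lets the test discriminate between the two classes of model.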
 
How can the universe not be expanding if GR is right? Some of you have asked. ...
I can find no such question, Eric L. There has been a discussion about why your irrelevant paper about a static Euclidean model is not about this universe. That is easy to understand: GR is non-Euclidean.

We have known for a century that GR allows for a contracting, static or expanding universe. It is the overwhelming evidence for an expanding universe that selects an expanding universe. That includes the Tolman surface brightness tests that support the concordance model.

We have known for many decades that tired light theories ("loss of energy by photons as they travel") are invalid.

And of course it is ignorant to assert that space "can be flat, Euclidean" on any scales because Special Relativity exists! That means that spacetime has to be a Minkowski spacetime at least. Add matter and energy and you have to consider GR.
 
Occam's razor actually

When I mentioned the four additional parameters needed to explain the surface brightness data, ...
Unfortunately the post title reflects the ignorance we see from actual physics cranks, Eric L :D.
Occam's razor is a criteria to distinguish between equivalent theories. When we have theories that predict the same things then the theory with the fewer entities should be favored. (N.B. That does not mean that this theory is right or the other theories wrong.) Scientists like simpler theories because they are easier to test.

When there is a "zero-parameter" theory that obviously does not describe the actual universe then Occam's razor is not needed. That theory is wrong.

ETA: When we have the concordance model which has an enormous amount of evidence supporting it and a "zero-parameter" theory that has not been tested against that enormous evidence then we throw away the "zero-parameter" theory.
 
At 21:34, Lerner cites Clowes et al., 2012 for ">200 Mpc LSS Takes Far Too Long to Form for BB".
This seems to be A structure in the early universe at z ~ 1.3 that exceeds the homogeneity scale of the R-W concordance cosmology. This and a couple of other larger structures are a problem for the Lambda-CDM model.
<snip>
Regarding the "anomalous" large-scale structures, the Clowes 2013 discovery was in the news at the time and is not terribly convincing. Accidental structure of this type is indeed created by statistically-homogenous data all the time. http://arxiv.org/abs/1306.1700 does the analysis:
I show that the algorithm used to identify the Huge-LQG regularly
finds even larger clusters of points, extending over Gpc scales, in explicitly homogeneous simulations of a Poisson point process with the same density as the quasar catalogue.
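That test is easy to reproduce in outline. Here's a minimal sketch (my own; the box size, point count, and density are illustrative, not the paper's actual values) of friends-of-friends grouping applied to an explicitly homogeneous Poisson point process:

Code:
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)

# Homogeneous Poisson process: n points thrown uniformly into a cubic box.
box, n = 3000.0, 20000           # Mpc; numbers chosen for illustration only
pts = rng.uniform(0.0, box, size=(n, 3))

# Friends-of-friends: link every pair closer than the linking length
# (70 h^-1 Mpc is the value Park+ 2015 quote for the DR7 quasar sample).
link = 70.0
tree = cKDTree(pts)
pairs = tree.query_pairs(link, output_type='ndarray')

# FoF groups are the connected components of the linkage graph.
graph = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                   shape=(n, n))
n_groups, labels = connected_components(graph, directed=False)
print(f"{n_groups} groups; richest has {np.bincount(labels).max()} members")
# Even with zero physical clustering, chance alignments produce rich,
# extended groups; push the linking length toward the mean point
# separation and the largest chance group grows rapidly, which is the
# effect the quoted analysis demonstrates on Gpc scales.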


<snip>
Clowes+ (2013) has been cited 44 times, per ADS.

In addition to the paper ben m cited, which is among the 44, there's Sapone+ (2014) and Park+ (2015).

Sapone+ (2014) "Curvature versus distances: Testing the FLRW cosmology"; here's the abstract (my bold):
We test the FLRW cosmology by reconstructing in a model-independent way both the Hubble parameter H(z) and the comoving distance D(z) via the most recent Hubble and Supernovae Ia data. In particular we use data binning with direct error propagation, the principal component analysis, the genetic algorithms and the Padé approximation. Using our reconstructions we evaluate the Clarkson et al. test known as ΩK(z), whose value is constant in redshift for the standard cosmological model, but deviates otherwise. Using present data, we find good agreement with the expected values of the standard cosmological model within the experimental errors, which are, however, large enough to allow for alternative cosmologies. A discrimination between the models may be possible for more precise future data. Finally, we provide forecasts, exploiting the baryon acoustic oscillation measurements from the Euclid survey.

Park+ (2015) "Large SDSS Quasar Groups and Their Statistical Significance"; here's the abstract (my bold):
We use a volume-limited sample of quasars in the Sloan Digital Sky Survey (SDSS) DR7 quasar catalog to identify quasar groups and address their statistical significance. This quasar sample has a uniform selection function on the sky and nearly a maximum possible contiguous volume that can be drawn from the DR7 catalog. Quasar groups are identified by using the Friend-of-Friend algorithm with a set of fixed comoving linking lengths. We find that the richness distribution of the richest 100 quasar groups or the size distribution of the largest 100 groups are statistically equivalent with those of randomly-distributed points with the same number density and sky coverage when groups are identified with the linking length of 70 h-1 Mpc. It is shown that the large-scale structures like the huge Large Quasar Group (U1.27) reported by Clowes et al. (2013) can be found with high probability even if quasars have no physical clustering, and does not challenge the initially homogeneous cosmological models. Our results are statistically more reliable than those of Nadathur (2013), where the test was made only for the largest quasar group. It is shown that the linking length should be smaller than 50 h-1 Mpc in order for the quasar groups identified in the DR7 catalog not to be dominated by associations of quasars grouped by chance. We present 20 richest quasar groups identified with the linking length of 70 h-1 Mpc for further analyses.

So, unlike the "Li-7 problem", RC's "This and a couple of other larger structures are a problem for the Lambda-CDM model" and what Eric L presents (in the relevant section) seems not so well supported by the relevant literature.
 
At 20:38, Lerner gets onto dark matter and starts with a misrepresentation.
I. D. Karachentsev, Astrophys. Bull. 67, 123-134 is Missing dark matter in the local universe (04/2012). This is not a problem with the Lambda-CDM prediction for the DM throughout the universe. It is a problem about the measurement of the local density of dark matter. The paper suggests solutions.
Thanks for doing Eric's job for him, JeanTate.

JeanTate said:
The paper itself is behind a paywall, however there is an arXiv preprint. The figure in Eric L's presentation is ~the same as Karachentsev (2012)'s Figure 4.

OK, this is another easy followup. Yes, the local group does seem to be below the average cosmic density ... surprising no one. LCDM straightforwardly predicts that the Universe has wildly varying local density, with galaxies found in superdense clusters and thin filaments and in near-voids; in fact, finding yourself in a region of "the average density" is an atypical experience. (A similar example: how many Americans live in a county with the national average density of 90 people per square mile?)

Anyway, http://adsabs.harvard.edu/abs/2014MNRAS.445..988N runs the detailed calculation. Yes, local underdensities like the observed one are perfectly common outcomes of LCDM initial conditions and normal GR evolution, and particularly common for spiral galaxies.
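A toy numerical version of the typicality point (my own sketch; the log-normal is a common rough approximation to the cosmic density field, and the sigma here is illustrative):

Code:
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the cosmic density field: log-normal cell densities.
rho = rng.lognormal(mean=0.0, sigma=1.0, size=10**6)
rho /= rho.mean()                        # normalize: global mean density = 1

# Volume-weighted: fraction of *space* within 20% of the mean density.
near_volume = np.mean(np.abs(rho - 1) < 0.2)

# Mass-weighted: fraction of *matter* (i.e. of galaxies, observers)
# sitting in cells within 20% of the mean density.
w = rho / rho.sum()
near_mass = np.sum(w[np.abs(rho - 1) < 0.2])

print(f"volume fraction near mean: {near_volume:.2%}")
print(f"mass fraction near mean:   {near_mass:.2%}")
# Both come out well under half: a typical patch of space, and a typical
# observer, does not sit at the mean density -- the point about the
# Local Volume measurement.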
A look at the papers Karachentsev cites, and papers which cite Karachentsev (2012), suggests that there is a fairly small group of people working in this sub-sub-subfield.

RC writes "The paper suggests solutions." Indeed. In section 3 ("The Problem of Missing Dark Matter"), Karachentsev writes:
Karachentsev (2012) said:
Thus, the most refined methods of estimating the virial mass in systems of different size and population lead to the value of the local (D≤50Mpc) average density of matter of Ωm,loc = 0.08±0.02, what is 3–4 times lower than the global value of Ωm,glob = 0.28±0.03 in the standard ΛCDM cosmology [30, 31]. Various possible explanations of this contradiction were proposed in the literature.
We shall list three of them here.
1) Dark matter in the systems of galaxies extends far beyond their virial radius, so that the total mass of a group/cluster is 3–4 times larger than the virial estimate.
2) The diameter of the considered region of the Local universe, 90 Mpc, does not correspond to the true scale of the “homogeneity cell”; our Galaxy may be located inside a giant void sized about 100–500 Mpc, where the mean density of matter is 3 to 4 times lower than the global value.
3) Most of the dark matter in the Universe, or about two thirds of it is not associated with groups and clusters of galaxies, but distributed in the space between them in the form of massive dark clumps or as a smooth “ocean.”
And he proceeds to examine each in some detail.

Interestingly, he looked at each in isolation; there seems to me to be no real discussion of the possibility of all three being 'in play'. This I find to be rather common, a desire to find a single (dominant) cause, rather than many factors.

Karachentsev (2005) is a good background paper; the last sentence in the abstract reads:
Karachentsev (2005) said:
To remove the discrepancy between the global and local quantities of Ωm, we assume the existence of two different DM components: (1) compact dark halos around individual galaxies and (2) a nonbaryonic dark matter ``ocean'' with ΩDM1~=0.07 and ΩDM2~=0.20, respectively.

One interesting paper which cites Karachentsev (2012) is Nuza+ (2014) "The cosmic web of the Local Universe: cosmic variance, matter content and its relation to galaxy morphology"; here's the abstract (my bold) (this is the paper ben m cites):
Nuza+ (2014) said:
We present, for the first time, a Local Universe (LU) characterization using high-precision constrained N-body simulations based on self-consistent phase-space reconstructions of the large-scale structure in the Two-Micron All-Sky Galaxy Redshift Survey. We analyse whether we live in a special cosmic web environment by estimating cosmic variance from a set of unconstrained ΛCDM simulations as a function of distance to random observers. By computing volume and mass filling fractions for voids, sheets, filaments and knots, we find that the LU displays a typical scatter of about 1σ at scales r ≳ 15 h-1 Mpc, in agreement with ΛCDM, converging to a fair unbiased sample when considering spheres of about 60 h-1 Mpc radius. Additionally, we compute the matter density profile of the LU and we have found a reasonable agreement with the estimates of Karachentsev only when considering the contribution of dark haloes. This indicates that observational estimates might be biased towards low-density values. As a first application of our reconstruction, we investigate the likelihood that different galaxy morphological types inhabit certain cosmic web environments. In particular, we find that, irrespective of the method used to define the web, either based on the density or the peculiar velocity field, elliptical galaxies show a clear tendency to preferentially reside in clusters as opposed to voids (up to levels of 5.3σ and 9.8σ, respectively) and conversely for spiral galaxies (up to levels of 5.6σ and 5.4σ, respectively). These findings are compatible with previous works, albeit at higher confidence levels.

Pavel Kroupa is well known for his papers pointing out discrepancies between observation and LCDM models, so it's no surprise to find one, by him, citing Karachentsev (2012). Kroupa (2012) "The Dark Matter Crisis: Falsification of the Current Standard Model of Cosmology"; here's the abstract:
Kroupa (2012) said:
The current standard model of cosmology (SMoC) requires The Dual Dwarf Galaxy Theorem to be true according to which two types of dwarf galaxies must exist: primordial dark-matter (DM) dominated (type A) dwarf galaxies, and tidal-dwarf and ram-pressure-dwarf (type B) galaxies void of DM. Type A dwarfs surround the host approximately spherically, while type B dwarfs are typically correlated in phase-space. Type B dwarfs must exist in any cosmological theory in which galaxies interact. Only one type of dwarf galaxy is observed to exist on the baryonic Tully-Fisher plot and in the radius-mass plane. The Milky Way satellite system forms a vast phase-space-correlated structure that includes globular clusters and stellar and gaseous streams. Other galaxies also have phase-space correlated satellite systems. Therefore, The Dual Dwarf Galaxy Theorem is falsified by observation and dynamically relevant cold or warm DM cannot exist. It is shown that the SMoC is incompatible with a large set of other extragalactic observations. Other theoretical solutions to cosmological observations exist. In particular, alone the empirical mass-discrepancy-acceleration correlation constitutes convincing evidence that galactic-scale dynamics must be Milgromian. Major problems with inflationary big bang cosmologies remain unresolved.

In the last year or so quite a number of low-surface brightness (LSB) galaxies have been discovered, most of them with low estimated masses.

The ones which are estimated to be satellites of our own galaxy are bringing the observed number closer to the estimated number, based on detailed cosmological simulations (most of which take as their starting point the estimated state of the universe at the time of creation of the CMB). They are - IIRC - extremely DM-dominated.

Perhaps more interesting, in terms "locally missing DM", is the discovery of quite a few such LSB galaxies in the direction of the Virgo cluster (some will surely turn out to be not part of that cluster); for example, Giallongo+ (2015) "The Detection of Ultra-faint Low Surface Brightness Dwarf Galaxies in the Virgo Cluster: A Probe of Dark Matter and Baryonic Physics".

I think it is likely that the next few years will see the discovery of a lot more such LSB galaxies. This will go some way to addressing this challenge, re "Dark Attractors" (source is Karachentsev (2012)): "For obvious reasons, the hypothesis of the existence between galaxy groups and clusters of a large number of invisible dark halos with different masses is difficult to prove observationally."
 
One thing I am struggling to understand, with Lerner's "tired light" cosmology:

We know that light from distant, redshifted supernovae is spread out over time. A supernova at z=0 with a monthlong outburst? If you see a spectrally-identical event at z=1, it has a two-month-long outburst.

(In fact, they're observed to spread out over time by exactly the same time stretch that appears in individual photon frequencies. In LCDM that is an obvious predicted effect.)

If I want to write down a tired-light model, I understand the tired-light energy loss on an individual photon level: "The source emitted a 6 eV photon but it only had 3 eV left when it arrived, so the detected energy is E0/(1+z)". (ETA: what I mean is, I understand the fact that this is what I'm supposed to be modeling. I do not understand it as the plausible outcome of some new microphysics.) But if I'm looking at a whole supernova---well, let's compare.

In LCDM, identical supernovae have identical emitted-photon counts. If there's an event emitting N photons, each at energy E0 at a distance D and redshift z, I detect N/D^2 of those photons but they're each at E0/(1+z).

In tired-light, I'm not sure. It is an observational fact that high-z supernova photons are detected for a longer period of time.

a) Do you think that's a real effect ("those really were longer-lasting supernovae by a factor of (1+z)", or more generally "longer by some factor we can parameterize in z, for which 1+z is the best fit")? In that case, the total emitted photon count was N*(1+z), the photon energies are E0/(1+z). The supernova's bolometric luminosity is then N/D^2 E0/(1+z), but its bolometric fluence is N/D^2. (Note that a non-bursty standard candle just has luminosity N/D^2 E0/(1+z) in this case, magically matching the supernova.)

b) Do you think that "new tired-light physics" puts variable time-delays on the photons? So the photons are emitted over time T, but have their arrivals spread out to T*(1+z)? (Or, sorry, "have their arrivals spread out by some unknown z-dependent factor we'll have to fit to the data".) In that case the bolometric luminosity is N/D^2 E0/(1+z)^2 while the bolometric fluence is N/D^2 E0/(1+z). Note that a time-independent candle isn't dimmed by mere timeshifting, so its bolometric luminosity is N/D^2 E0/(1+z) with just the photon-energy-loss correction.

c) If we're putting in a time-shift effect in empty space (as in answer b) surely this is experienced independently photon-by-photon. An individual photon has no way of knowing whether it's from the early edge of a supernova or the late edge. An actual "dT --> dT(1+z)" factor sounds highly implausible in this sense; surely any real photon-delays-in-empty-space would be a time-smearing (with *sigma* proportional to z, maybe) rather than a coherent stretch. Please show how a tired-light model's time smearing is parameterized, and show whether you've found any parameter choices that agree with the data.

(Maybe you don't want to do this yet, but we *have* to decide how the photon-arrival-time stretching works, because until you know that you don't know how to treat supernovae as standard candles.)

d) If we're allowed to invent new physics allowing perfectly-achromatic, perfectly-collinear "redshifting" at all frequencies, surely we have lost some confidence in any of our other priors about photon propagation through free space. Please document the set of different assumptions, hypotheses, and constraints that you've considered for the "tired light" behavior itself.
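To keep the bookkeeping from cases (a) and (b) straight, here's a minimal sketch (my own tabulation of the factors stated above, nothing more) of the bolometric dimming relative to N E0 / D^2:

Code:
# Bolometric dimming factors, relative to N * E0 / D^2, for the
# scenarios laid out above ("flux" = the bolometric luminosity as
# measured, "fluence" = total received energy).
def dimming(z, model):
    s = 1.0 + z
    if model == "LCDM":      # photon energy loss plus time dilation
        return {"flux": 1 / s**2, "fluence": 1 / s}
    if model == "tired_a":   # (a) events really last (1+z) times longer:
        #     N*(1+z) photons, each arriving at E0/(1+z)
        return {"flux": 1 / s, "fluence": 1.0}
    if model == "tired_b":   # (b) arrival times stretched by (1+z):
        return {"flux": 1 / s**2, "fluence": 1 / s}
    raise ValueError(model)

for m in ("LCDM", "tired_a", "tired_b"):
    print(m, dimming(1.0, m))

# Case (b) matches the LCDM supernova factors, but a steady (non-bursty)
# candle in (b) is not dimmed by the time stretch, so its flux goes as
# 1/(1+z) only -- steady sources and supernovae at the same z would then
# disagree, which is one way to test the options against data.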
 
RC does not seem to have commented on Portinari, Casagrande, Flynn (2010) (PCF10, for short) (please point to such a comment, if I missed it).

<snip>

I read the Portinari paper and, wow, that is an incredibly roundabout way of estimating helium abundances, and exactly no one seems to think it's telling us anything about the cosmological He abundance (which is measured well elsewhere)---where did you fish that up? Don't tell me you just pulled out Figure 5 and called it a "helium abundance measurement" or something?
I agree.

This paper seems to be primarily about the fit of stellar evolution models (which produce isochrones), and 'homology relations', to observations. It is mostly about stars in three MW globular clusters, and how various modifications of models affect the fits (this is a broad summary), with some local stars used as a method to check the fits.

It's pretty clear that this general topic concerns models of stars, and to what extent the standard models need modification to incorporate physics known to be relevant, in real stars (e.g. rotation; PCF10, in Section 6, covers ~a half dozen of these).

There's a strong hint that PCF10 do not take estimated subprimordial helium abundances as pointing to a real 'He deficit' (in Section 5):
PCF10 said:
While awaiting for a solution to this problem by improved stellar physics (see our final discussion), here we adopt a very pragmatic approach: we assume that ΔY/ΔZ ∼2 also for any Z < Z☉, and empirically calibrate homology relations so that this result is returned for nearby stars. Our assumption is very reasonable, since HII region measurements and chemical evolution models support a constant ΔY/ΔZ ∼2 down to very low Z (e.g. Peimbert et al. 2007; Carigi & Peimbert 2008), as does the following simple argument: taking the solar bulk abundances (Asplund et al. 2009) and the primordial YP = 0.240 (Steigman 2007), one derives ΔY/ΔZ = 2.1 for Z < Z☉.

This quote from the Introduction points to a method Eric L could have used, to show more directly that estimated He abundances in at least stars is subprimordial (my bold):
PCF10 said:
In spite of its large abundance, helium is an elusive element. It can be measured directly, from spectroscopic lines, only in stars hotter than ∼10 000 K: young, massive stars (or their surrounding HII regions) or blue horizontal branch (HB) stars (where though, apart from a small temperature range, the surface abundance may not trace the original one, due to helium sedimentation and metal levitation: Michaud, Vauclair & Vauclair 1983; Michaud, Richer & Richard 2008; Villanova, Piotto & Gratton 2009, and references therein). Only indirect methods can be used for stars of lower mass and cooler temperatures, which constitute the bulk of the Galaxy’s stellar population.
 
Here's what I found:

JeanTate said:
Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)
Sbordone+ (2012): "Lithium abundances in extremely metal-poor turn-off stars". The figure in Eric L's presentation seems to be ~the same as Figure 4 in this paper.

Hansen+ (2015): "An Elemental Assay of Very, Extremely, and Ultra-metal-poor Stars"

He also far too low in local stars: Portinari, Casagrande, Flynn (2010)
Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophs. Bull. 67, 123-134
Karachentsev (2012): "Missing dark matter in the local universe"

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)
Clowes+ (2012): "Two close large quasar groups of size ~350 Mpc at z ~ 1.2"

There's also Clowes+ (2013), which ben m cited: "A structure in the early Universe at z ~ 1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"

CBR alignments (etc): (no refs)

Evidence indicates scattering/abs of RF radiation in local universe: Atrophys & SS, 1993
Nothing for the former, obviously.

"Atrophys & SS" seems to be a typo; perhaps Eric L meant "Astrophysics and Space Science", the journal?

If so, then there were 12 volumes published in 1993, from 199 to 210. I do not intend to find out which paper, or papers, Eric L is referring to.

Free Parameters exceed measurements: Disney? (voiceover, not slide)
I did not try to track this down.

<snip>

Thanks to Jean Tate and Ben m for posting the actual references. Not sure why they are not in the slides—it might be that the wrong version of the slides went into the video. Hope you found some other interesting stuff in digging these up! Anyway, I will post the missing ones here as soon as I get a moment—maybe tomorrow.

<snip>

When Eric L posts these, I will follow up the ones I've not yet commented on, starting with the 'CBR alignments' and 'Evidence indicates scattering/abs of RF radiation in local universe'.
 
RC does not seem to have commented on Portinari, Casagrande, Flynn (2010) (PCF10, for short) (please point to such a comment, if I missed it).
A bit hidden away in my post
No citations to literature saying that He is wrong. Goes on about He in local stars being "far too low".
More clearly: Lerner provides no citations to literature saying that calculations of primordial He abundance in the Big Bang are wrong or that observations contradict the calculations. All we have is his unsupported assertion that the abundance of He in local stars (as in PCF10) can be used to predict the primordial abundance and is "far too low".

Lerner needs to learn how primordial He abundance is actually derived from observations. For example, they look at hot HII regions in dwarf galaxies to eliminate the need to model He abundance in stars.
Helium-4 and Dwarf Galaxies
All investigations of this type result in a primordial helium-4 mass fraction of 24 per cent, with an uncertainty of little more than one per cent. Not bad for a straightforward extrapolation. In detail, however, there are still disagreements between the different research groups that have performed such evaluations; by a conservative estimate, the true value lies somewhere between 23.2 and 25.8 per cent.
The BB model prediction is 25%.
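For anyone unfamiliar with the method RC quotes: it is essentially a regression of helium mass fraction Y against metallicity across metal-poor HII regions, extrapolated to zero metallicity. A minimal sketch (the numbers below are invented purely to show the shape of the calculation; real analyses use measured dwarf-galaxy samples and careful error budgets):

Code:
import numpy as np

# Illustrative (made-up-for-demo) data: helium mass fraction Y versus
# oxygen abundance O/H in metal-poor HII regions. Real analyses use
# measured dwarf-galaxy samples; these values are invented for shape only.
oh = np.array([2, 4, 6, 8, 10, 12]) * 1e-5
y  = np.array([0.241, 0.242, 0.244, 0.245, 0.247, 0.248])

# Primordial Yp = intercept of the linear Y(O/H) relation at zero
# metallicity, i.e. "the straightforward extrapolation" in the quote.
slope, yp = np.polyfit(oh, y, 1)
print(f"Yp (extrapolated) = {yp:.3f}, dY/d(O/H) = {slope:.0f}")
# With these toy numbers the intercept lands near 0.24, the sort of
# value the quoted passage describes -- and no stellar helium modelling
# is needed anywhere in the chain.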
 
