
Plasma Cosmology - Woo or not

I don't particularly enjoy being herded around between threads but since you asked over there to have this over here:
To quote another one: Lerner, Eric J., "Evidence for a Non-Expanding Universe: Surface Brightness Data From HUDF", March 21, 2006, Volume 822, pp. 60-74, 1st Crisis in Cosmology Conference. And there are a good few more in rather more respected journals (than his own :) )... but they are not relevant to this thread and belong in the plasma cosmology thread. So please put any replies to this slightly off-topic material in there. Don't want this thread to be another one hijacked off at a tangent.
The paper itself points out the importance of galaxy evolution. This is quite messy astrophysics - in the sense that it's difficult to build good models because so much is going on, basically. It's really not clear to me that his arguments against galaxy evolution being able to explain the observations are sufficiently strong - he in fact describes it as a 'tentative conclusion' himself.
Now, if the evidence for a non-expanding model was in any way competitive one might take the benefit of the simplicity of the fit to the data that Lerner shows as strong evidence, but as it is, balanced against all the other evidence, it does really look like coincidence* that the evolution of galaxy surface brightnesses gives those results.
In other words, it is an interesting observation for improving our understanding of galaxy formation and evolution, but it does not deeply shake the foundations of cosmology.

*which one may show has physics behind it to make it less of a coincidence
 
One more nail in the coffin ...

Readers who have been following this thread for some time will perhaps recall that our fave drive-by spamster, Z, many months ago posted a raft of material on the fractal nature of the universe, and how this was consistent with PC (so he claimed; as usual, his claim withered under mild scrutiny, and he abandoned it).

Well, this recent arXiv preprint pretty much puts paid to the idea of an inhomogeneous universe ... at least as far as galaxies are concerned, and once the scale considered is greater than ... well, here's the preprint, and the abstract (enjoy):

The scale of homogeneity of the galaxy distribution in SDSS DR6
abstract said:
The assumption that the Universe, on sufficiently large scales, is homogeneous and isotropic is crucial to our current understanding of cosmology. In this paper we test if the observed galaxy distribution is actually homogeneous on large scales. We have carried out a multifractal analysis of the galaxy distribution in a volume limited subsample from the SDSS DR6. This considers the scaling properties of different moments of galaxy number counts in spheres of varying radius $r$ centered on galaxies. This analysis gives the spectrum of generalized dimension $D_q(r)$, where $q >0$ quantifies the scaling properties in overdense regions and $q<0$ in underdense regions. We expect $D_q(r)=3$ for a homogeneous, random point distribution.
In our analysis we have determined $D_q(r)$ in the range $-4 \le q \le 4$ and $7 \le r \le 98 h^{-1} {\rm Mpc}$. In addition to the SDSS data we have analysed several random samples which are homogeneous by construction. Simulated galaxy samples generated from dark matter N-body simulations and the Millennium Run were also analysed. The SDSS data is considered to be homogeneous if the measured $D_q$ is consistent with that of the random samples. We find that the galaxy distribution becomes homogeneous at a length-scale between 60 and $70 h^{-1} {\rm Mpc}$. The galaxy distribution, we find, is homogeneous at length-scales greater than $70 h^{-1} {\rm Mpc}$. This is consistent with earlier works which find the transition to homogeneity at around $70 h^{-1} {\rm Mpc}$.
Now to grasp why this is yet another nail in the PC coffin, you have to keep in mind that according to Lerner's version of PC, GR cannot be used to explain the Hubble relationship (an inhomogeneous universe is one possible 'out' in this regard), and that, going waaay back, Alfvén once considered the possibility that the universe has a quasi-fractal, hierarchical structure, such that at successively larger scales its average density falls at a rate that makes straight-forward application of GR invalid.

This latest result makes both explanations even more difficult.
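
As an aside, for anyone who wants to see what the abstract's D_q estimator boils down to in practice, here is a minimal sketch in Python. To be clear, this is not the authors' pipeline: the choice of radii, the finite-difference derivative, and the neglect of the q = 1 special case and of survey-boundary corrections are all my simplifications, purely for illustration.

[code]
import numpy as np
from scipy.spatial import cKDTree

# Toy generalized-dimension estimator for a 3D point set (q != 1).
# For a homogeneous (Poisson) distribution the counts scale as n(r) ~ r^3,
# so D_q(r) should come out close to 3, which is the test the paper applies.
def generalized_dimension(points, radii, q):
    tree = cKDTree(points)
    # n_i(r): neighbours within r of each point, excluding the point itself
    counts = np.array([tree.query_ball_point(points, r, return_length=True) - 1
                       for r in radii], dtype=float)        # shape (len(radii), N)
    counts = np.clip(counts, 1.0, None)                     # guard against log(0)
    C_q = (counts ** (q - 1)).mean(axis=1)                  # C_q(r) = < n_i(r)^(q-1) >
    # D_q(r) = 1/(q-1) * d ln C_q / d ln r, by finite differences
    return np.gradient(np.log(C_q), np.log(radii)) / (q - 1)

rng = np.random.default_rng(0)
box = rng.uniform(0.0, 100.0, size=(10000, 3))   # homogeneous box, no clustering
radii = np.linspace(5.0, 20.0, 8)
print(generalized_dimension(box, radii, q=2))    # values near 3 (edge effects not corrected)
[/code]

Run on a clustered, fractal-like point set instead of the uniform box, the same estimator returns D_q well below 3 at small r; the scale where it climbs back to 3 is the transition to homogeneity the paper is measuring.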
 
Drive-by post

Abstract: For more than a half century cosmologists have been guided by the assumption that matter is distributed homogeneously on sufficiently large scales. On the other hand, observations have consistently yielded evidence for inhomogeneity in the distribution of matter right up to the limits of most surveys. The apparent paradox can be understood in terms of the role that paradigms play in the evolution of science.
http://arxiv.org/ftp/arxiv/papers/0805/0805.2643.pdf



WMAP catastrophe
This month, we’ve chosen to highlight a paper that is causing a stir in cosmology. Serious doubt is cast upon the validity of the entire body of WMAP analysis. Thanks to Eric Lerner for the following analysis:
An important new paper shows that there are serious errors in the WMAP team’s analysis of the satellite’s data. The new paper, Observation number correlation in WMAP data by Ti-Pei Li et al, which has been accepted by MNRAS, shows that a spurious apparent temperature is introduced into the map of the CMB by the WMAP team’s analyses. As a result, the conclusions based on this analysis, including the widely-publicized supposed agreement with some predictions of the dominant LCDM cosmology, are thrown into doubt. Li et al’s recent paper on WMAP observation number effects, arXiv 0905.0075, is a follow-up to Liu and Li’s earlier paper on the same subject, 0806.4493, which was reported in this newsletter, but whose significance was not fully recognized at the time.
WMAP mapped the tiny variations of anisotropies in the CMB by comparing the inputs of two receivers or horns placed 141 degrees apart, as the satellite spun and scanned the entire sky. Complex mathematical procedures were used to transform these differences in inputs into a map of absolute temperature or intensity at every point in the sky. In outline the authors argue that:
1. The way temperature is calculated by the WMAP team based on the differential between the two WMAP horns is in error, as is best explained in the Li et al paper. When the number of observations of a given pixel by the “plus” horn (the number of times that point in the sky is scanned) is different from the number of observations by the “minus” horn, there is a spurious temperature added, dependent on transmission imbalances, which are different for different bands (Eqs. 5 and 6 of Li et al). These spurious temperatures, up to 10-20 micro-K, are clearly shown in figure 3, which shows the pixel-by-pixel correlation of the difference in observation number and temperature. This spurious temperature, dependent on observation number, in turn produces a spurious fluctuation in temperature which is dependent on the number of observations. The number of observations in turn is a strong function of declination. See figure 2 of Liu and Li, which tells the story very well. Li explains procedures by which the raw data can be re-analysed to eliminate these artefacts.
2. The method by which WMAP temperatures are calculated also does not accurately correct for the fact that pixels 141 degrees away from hot spots are measured too cold. In Liu and Li, p.18, they show that pixels 141 degrees away from the 2000 hottest pixels in the map are on average 12-14 micro-K cooler than average pixels, depending on the band. This is several hundred times above the expected random variation. Since each circle contains 15,000 pixels spread across a good section of the sky, the average temperature should be very close to the average of the whole sky. This is even truer for 2,000 such circles. But that is not what Liu and Li found.
So, from these papers, it seems that there are spurious temperature anisotropies that are comparable with the entire anisotropy found in the WMAP team’s maps. Therefore the entire analysis of cosmological parameters based on these maps is wrong. Indeed it seems very puzzling that an analysis that is so contaminated with errors should come up with parameters anywhere near those expected by LCDM models. The fact that the Li et al paper was accepted by MNRAS is perhaps an indication that some of the leading journals are becoming more open to work that challenges conventional assumptions in cosmology.
Title: Observation number correlation in WMAP data
Authors: Ti-Pei Li, Hao Liu, Li-Ming Song, Shao-Lin Xiong, Jian-Yin Nie
arXiv:0905.0075
http://www.cosmology.info/newsletter/2009.06.pdf
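
Side note on the mechanism in point 1 of the quoted newsletter: the coupling between a transmission imbalance and unequal numbers of "plus" and "minus" passes is easy to reproduce in a toy simulation. To be clear, the sketch below is not the WMAP pipeline and not the Li et al equations; the imbalance value, the sky model and the naive map-maker are all invented for illustration, and the real pipeline removes the common-mode signal and corrects for the imbalance, so this only shows the shape of the effect being argued about.

[code]
import numpy as np

# Toy differential map-making with a small transmission imbalance x.
# Each observation measures d = (1+x)*T[plus pixel] - (1-x)*T[minus pixel];
# a naive map-maker averages +d over a pixel's "plus" passes and -d over its
# "minus" passes.  All numbers below are made up for illustration only.
rng = np.random.default_rng(1)

n_pix, n_obs = 2000, 1_000_000
x      = 1e-3                                     # hypothetical imbalance
T_mean = 2.725e6                                  # common-mode signal, micro-K
T_sky  = T_mean + rng.normal(0.0, 100.0, n_pix)   # ~100 micro-K anisotropy on top

a = rng.integers(0, n_pix, n_obs)                 # pixel seen by the "plus" horn
b = rng.integers(0, n_pix, n_obs)                 # pixel seen by the "minus" horn
d = (1 + x) * T_sky[a] - (1 - x) * T_sky[b]

num, n_tot, n_pm = np.zeros(n_pix), np.zeros(n_pix), np.zeros(n_pix)
np.add.at(num,  a,  d); np.add.at(num,  b, -d)    # accumulate +/- differences
np.add.at(n_tot, a, 1); np.add.at(n_tot, b, 1)    # total passes per pixel
np.add.at(n_pm,  a, 1); np.add.at(n_pm,  b, -1)   # (plus passes) - (minus passes)

T_hat    = num / n_tot                            # naive estimate of T - <T>
residual = T_hat - (T_sky - T_sky.mean())         # spurious part of the map
expected = x * (T_sky + T_sky.mean()) * n_pm / n_tot

print("rms of spurious signal (micro-K):", residual.std())
print("correlation with the (N+ - N-) term:", np.corrcoef(residual, expected)[0, 1])
[/code]

In this toy the printed correlation should come out very close to 1: the per-pixel error tracks the imbalance times the fractional difference in pass counts. Whether the actual WMAP processing leaves any such residual is exactly what Li et al and Liu & Li are arguing with the WMAP team about.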
 
[ quote of the drive-by post above deleted ]
This is about the Lambda-CDM theory (not plasma cosmology) and so I have copied it there.
 
Abstract: For more than a half century cosmologists have been guided by the assumption that matter is distributed homogeneously on sufficiently large scales. On the other hand, observations have consistently yielded evidence for inhomogeneity in the distribution of matter right up to the limits of most surveys. The apparent paradox can be understood in terms of the role that paradigms play in the evolution of science.

Like creationists, the EU/PC advocates interpret/spin any 'problem' with the mainstream theory as evidence for their claims.

Even if this issue is correct, we still do not see Peratt's galaxy-powering currents in the WMAP raw data! Remember, the CMB map is constructed from maps in five frequency bands, and we should still see Peratt's currents there.

Tom
 
Like creationists, the EU/PC advocates interpret/spin any 'problem' with the mainstream theory as evidence for their claims.


Some PC advocates. I'll refer you to this post, where I'm going to (shortly) outline the predictions that PC made, how they compare to observations now, and how they differ from Big Bang predictions, with links to the relevant publications.
 
Some PC advocates. I'll refer you to this post, where I'm going to (shortly) outline the predictions that PC made, how they compare to observations now, and how they differ from Big Bang predictions, with links to the relevant publications.
Cygnus X-1 - I will refer you to my reply to Zeuzzz's post.
Hopefully the PC predictions Zeuzzz comes up with will be good qualitative and falsifiable predictions.

Hopefully the PC papers that Zeuzzz comes up with will be evidence for PC rather than irrelevant evidence against BBT. The papers that he cites in this post are about Li abundances in metal-poor stars being lower than previously calculated. Strangely, there are no attempts in the cited papers to fix this using PC.


He may have trouble deciding whether a prediction belongs to PC or not since the definition of PC seems fairly subjective. The consensus in this thread seems to be that PC is a collection of any theory that assumes that
  • EM effects "dominate" (enough so that gravity can be ignored?) and
  • The universe is not expanding and
  • The universe is eternal.
(the ands are in bold since they could be and/or depending on which PC advocate you read), even if the theories are mutually exclusive.
 
Some PC advocates. I'll refer you to this post, where I'm going to (shortly) outline the predictions that PC made, how they compare to observations now, and how they differ from Big Bang predictions, with links to the relevant publications.

Unfortunately I can't post URLs yet.

The link you provide gives NO PC prediction of Li abundances.

Again, we do not see Peratt's current streams in WMAP raw data. See the recent post in my blog "Scott Rebuttal. II. The Peratt Galaxy Model vs. the Cosmic Microwave Background". Even Peratt's own predictions reveal this problem.

Tom
 
The link you provide gives NO PC prediction of Li abundances.


Okay, here's some quickly, if it's links you want to sift through:

http://en.wikipedia.org/w/index.php?title=Plasma_cosmology&oldid=88918621#Light_elements_abundance
Light elements abundance

The structure formation theory allowed Lerner to calculate the size of stars formed in the formation of a galaxy and thus the amounts of helium and other light elements that will be generated during galaxy formation.[37] This led to the predictions that large numbers of intermediate mass stars (from 4-12 solar masses) would be generated during the formations of galaxies. Standard stellar evolution theory indicates that these stars produce and emit to the environment large amounts of helium-4, but very little carbon, nitrogen and oxygen.

The plasma calculations, which contained no free variables, led to a broader range of predicted abundances than big bang nucleosynthesis, because a process occurring in individual galaxies would be subject to individual variation.[38] The minimum predicted value is consistent with the minimum observed values of 4He abundance.[39] In order to account for the observed amounts of deuterium and various isotopes of lithium, Eric Lerner has posited that cosmic rays from the early stars could, by collisions with ambient hydrogen and other elements, produce the light elements unaccounted for in stellar nucleosynthesis.[40]

* E. J. Lerner, "On the problem of big-bang nucleosynthesis", Astrophys. Space Sci. 227, 145-149 (1995).

* E. J. Lerner, "Galactic Model of Element Formation", IEEE Transactions on Plasma Science, Vol. 17, No. 3, April 1989, pp. 259-263.

* E. J. Lerner, "A comparison of plasma cosmology and the Big Bang", IEEE Transactions on Plasma Science, Vol. 31, Issue 6, Dec. 2003, pp. 1268-1275. [full text]
 

If you are going to read these (rather old) papers from Eric Lerner (a plasma/fusion scientist), then you also need to read the following about his book "The Big Bang Never Happened", which covers the same topics:
A problem raised with Eric Lerner's prediction of Li abundances is that it is wrong given the current distribution of stellar masses (where supernovae produce considerable amounts of CNO compared to helium). His reply is "However, the detailed models and calculations presented in my papers showed that the early galaxies were dominated by intermediate mass stars too small to create supernovae. These stars produce and blow off an outer layer of helium but very little or no CNO is released to the interstellar medium [E.J. Lerner, IEEE Transactions on Plasma Science, Vol. 17, pp. 259-263]".
I do not have access to his paper so I do not know whether it contains an actual model of galaxy formation that forms intermediate mass stars or whether the intermediate mass stars are an assumption in the model.

However, the consensus in astronomy is that Population III stars existed in the early galaxies in order to create elements for the Population II stars. These are massive stars, estimated to be 10 to 100 solar masses, and could be very massive (hundreds of solar masses).
 
Okay, here's some quickly, if it's links you want to sift through:

I actually have all these references in my collection.

The statement

"The plasma calculations, which contained no free variables, , led to a broader range of predicted abundances than big bang nucleosynthesis, because a process occurring in individual galaxies would be subject to individual variation."

is almost oxymoronic. If you have no free variables, how do you have individual variation?

In "Galactic Model of Element Formation", Lerner tosses together nucleosynthesis results from other researchers who are not using plasma cosmology initial conditions. Estimates of elemental yields will probably be very different compared to BBN and standard stellar model calculations. If Lerner means that element building is taking place in 'shock waves' during galaxy formation, this would have a radically different composition compared to the fast expanding 'cooking' of BBN vs. slow, steady 'cooking' in stellar interiors vs. fast events in novae & supernovae. Lerner fails to explicitly show these shocks produce the required abundances. Most of Lerner's work is assembled piecemeal using 'rules of thumb' that were established under conditions very different from the plasma cosmology model.

But not even that matters as Peratt's galaxy model clearly predicted we should see 'spaghetti' like filaments of microwave emission from the current streams and these do not appear in COBE or WMAP (see "Electric space: evolution of the plasma universe" by A. Peratt, pg 101).

Tom
 
Tom, if you have found viable issues with Plasma Cosmology and the models proposed in the literature by Alfvén, Birkeland, Peratt, Lerner, Verschuur, et al, then I suggest that you write up such observations and get them published in a journal so the scientists in question can reply. They have, after all, published all their material in journals for the world to look at. I'm sure that Lerner, Peratt, et al, would come up with answers to your criticisms.

As someone who has:

originally read Eric Lerner's The Big Bang Never Happened while in graduate school studying for my Ph.D. in astrophysics. More recently, I've read Anthony Peratt's Physics of the Plasma Universe (Springer-Verlag, 1992). I expected something similar to a reasonable, popular-level update to the latest claims of the plasma cosmology crowd.


What has shifted your view of late from plasma cosmology being reasonable to not?
 
Tom, if you have found viable issues with Plasma Cosmology and the models proposed in the literature by Alfvén, Birkeland, Peratt, Lerner, Verschuur, et al, then I suggest that you write up such observations and get them published in a journal so the scientists in question can reply. They have, after all, published all their material in journals for the world to look at. I'm sure that Lerner, Peratt, et al, would come up with answers to your criticisms.

As someone who has:
[ stuff deleted ]

What has shifted your view of late from plasma cosmology being reasonable to not?

Why did my opinion of Peratt's work change? When I first read BBNH, in the early 1990s while in graduate school, the COBE results had just been released. In principle, a number of solutions to cosmological problems were still 'on the table'. Peratt did some interesting work, and back then he did it largely correctly - he spelled out his model and then explored the implications and other predictions it made. While this model did a reasonable job matching galaxy rotation curves, there were loads of other consequences of his model, many mentioned in this thread and discussed on my site, which failed to match existing observations. Mathematical correctness is not evidence that reality works that way; it requires validation against experiments & observations: Experiment without theory is tinkering. Theory without experiment is numerology.

Note that the EU crowd likes to complain about other physicists & astronomers relying on mathematical models, but if it is a model developed by Peratt or Lerner, it seems to have their support in spite of its failed predictions in areas where we do have good measurements.

I have no problems with researchers proposing ideas that might be regarded as outlandish. The most successful researchers do it often, then they explore possible implications of the idea. There are several possible consequences of this:

1) predictions match observations in a way that is indistinguishable from the current model, in which case one must search for *differences* that the different models imply and find experiments or observations with which to test them. Some experiments might require time to improve experimental sensitivity to fully validate it. Dark matter is in this category - loads of observations are consistent with the existence of a particle below our current experimental sensitivity - we just haven't isolated the particle yet.

2) additional predictions don't match observations at all. This means the researcher should move on to other ideas.

3) Additional predictions and new observations match well.

4) Very often what happens is that some subset of the idea might be determined to have a contribution in the mainstream theory and becomes incorporated into existing theory.

A number of researchers have had significant careers exploring unusual ideas and I'm actually a fan of these, such as the recent paper by Ryskin (Gregory Ryskin, "Secular variation of the earth’s magnetic field: induced by the ocean flow?", New Journal of Physics, 11(6):063015 (23pp), 2009). Why don't you search the ADS site for Stirling Colgate, who explored a number of 'off the wall' ideas? Colgate pursued these ideas while condition (1) applied and abandoned them if he hit condition (2). A few of them made it to condition (3) and some made it to (4). The cranks and crackpots hang on to ideas even when they are stuck in (2). Plasma cosmology is in state (2) and has therefore been abandoned.

As for publishing some of my material in professional journals, I have considered it.

* ApJ would probably not accept such a paper since it would imply they might have to accept a rebuttal that would be below their standards.
* I now regard the ITPS review standards as questionable. After all, they let the Thornhill paper ("The Z-Pinch Morphology of Supernova 1987A and Electric Stars." IEEE Transactions on Plasma Science, 35:832–844, August 2007. doi: 10.1109/TPS.2007.895423.) appear in their publication. Among the many problems with this paper was the use of 30-year old neutrino & helioseismology data (the Young Earth creationists used this same data but some have since moved it to a list of arguments they should not use). Over the past 30 years, we've developed newer & better neutrino detectors plus have over ten years of high-time resolution helioseismology data from space using SOHO/MDI which have resolved these issues.

I don't rule out attempting a 'research grade' publication from this, but the big issue is that the EU and PC guys aren't doing 'research grade' astronomy. Many of the analyses published by Lerner, Peratt, and others are at the level of more advanced homework problems for a graduate-level astrophysics course. Today, desktop-class machines and small clusters can run many of the simulations which Peratt needed a supercomputer to run in the 1980s.
 
The November newsletter is out; not really plasma cosmology, but more issues with the big bang, etc.

http://www.cosmology.info/newsletter/2009.11.pdf
Plasmas
Plasma physics is a vital component of our understanding of the cosmos, given that plasmas occur ubiquitously in space. The following paper is of great significance to plasma cosmologists, and gives exact equations for a variety of phenomena in laboratory plasmas. The recent discovery of Bethe’s CNO nuclear fusion cycle at the foot points of solar coronal arches (2005, astro-ph/0512633) emphasises the importance in astrophysics of the connection between thermonuclear energy and electromagnetism. “The importance of the thermomagnetic Nernst effect for the problem to stabilize plasma by a vorticity containing shear flow, becomes important in the vicinity of cold walls where the temperature gradient is large, but can conceivably also become important in the presence of thermonuclear reactions, where the reaction rate goes with a high power of the temperature.”
[102]gen-phys arXiv:0910.1626
Title: Shear Flow Stabilization of a z-Pinch Plasma in the Presence of a Radial Temperature Gradient
Authors: F.Winterberg


As for this thread ... I think that I've said everything I know on this subject. Any further queries I could likely reply to by just re-quoting previous answers I have given. And Ziggurat, if you are reading, it would be nice of you to address my reply about the longest-ranging force in the universe.
 
As for this thread ... I think that I've said everything I know on this subject. Any further queries I could likely reply to by just re-quoting previous answers I have given. And Ziggurat, if you are reading, it would be nice of you to address my reply about the longest-ranging force in the universe.
I doubt that Ziggurat would be interested in a post about gravity ("the longest-ranging force in the universe") that is several months old.
The post is especially outdated because you must by now have realized that Anthony Peratt's Plasma Model of Galaxy Formation is so wrong.
 
Indeed yes, it falls down in too many areas to be a serious contender. But similar ideas based on 1/r laws (instead of 1/r^2) and using more complex initial mass distributions than a simple high-mass core could prove fruitful in the future.
Though none exist to date that I know of, using either EM or a modification of gravity.
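
For what it's worth, the attraction of a 1/r law for rotation curves is one line of algebra: for a circular orbit the centripetal acceleration is v^2/r, so an inverse-square acceleration GM/r^2 from a central mass gives the Keplerian fall-off v = sqrt(GM/r), while a hypothetical acceleration K/r gives a flat v = sqrt(K). A quick illustrative sketch (the point-mass setup and the constants are mine, not anybody's published model):

[code]
import numpy as np

# Circular-orbit speed v(r) for two central force laws, arbitrary units.
# Setting v^2/r equal to the acceleration gives:
#   a = G*M/r^2  ->  v = sqrt(G*M/r)   (Keplerian fall-off)
#   a = K/r      ->  v = sqrt(K)       (flat rotation curve)
G_M, K = 1.0, 1.0
r = np.linspace(1.0, 10.0, 10)
v_inverse_square = np.sqrt(G_M / r)
v_inverse_r      = np.full_like(r, np.sqrt(K))

for ri, v1, v2 in zip(r, v_inverse_square, v_inverse_r):
    print(f"r = {ri:5.2f}   v(1/r^2 law) = {v1:.3f}   v(1/r law) = {v2:.3f}")
[/code]

Of course real galaxies are not point masses, which is why any serious proposal along these lines has to specify the mass (or current) distribution as well as the force law.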

Maybe you could answer the question about the longest-acting force in the universe, Reality Check?
 
