Why is there so much crackpot physics?

What strikes me most about Farsight (and that other PM/crank who posts here) is that they seem to think that applying elements of their project-management techniques to scientific exploration will reap rewards.
It's BurntSynapse who advocates project-management techniques. I've never seen Farsight do so.
 
Arp's claims confirmed in actual lab tests:


http://www.thunderbolts.info/forum/phpBB3/viewtopic.php?f=3&t=6305


Now that plasma redshift has actually been observed and documented in the lab, the mainstream sky deities are toast.

http://www.sciencedirect.com/science/article/pii/S0030402608000089

The mainstream never had a lick of empirical support to justify any of their claims, but now they have a strong physical laboratory refutation to deal with on top of their numerous qualification problems. Say so long to mainstream theory; it's about to die a natural empirical death. There has never been a single laboratory observation that was a bigger threat to mainstream theory than that observation of plasma redshift in the lab, IMO. Lyndon Ashmore has already written a pretty good paper showing that these laboratory redshift results are not only predicted by PC/EU theory, but that this successful prediction absolutely destroys the credibility of the mainstream's claims about expansion and acceleration.

http://vixra.org/pdf/1105.0010v1.pdf

I'm sitting here trying to figure out exactly what the mainstream is going to do with this relatively new (last few years) successful prediction of PC theory over the long haul. I'm sure they will continue to ignore it for a while yet, but the empirical handwriting is now on the wall. There is now a FULLY EMPIRICAL explanation for the redshift phenomenon.

Considering the fact that astronomers claim to be actual "scientists", you'd think that they'd be the first ones to jump all over that redshift observation in the lab, but NOOOOOOO! OMG, what a joke BB theory has become. Lambda-CDM is 96 percent metaphysical BS and only 4 percent actual physics. Worse yet, their whole basis for claiming that expansion/acceleration has ever been "observed" has been stripped from them entirely. The only thing they actually ever "observed" were redshifted photons, and there is already a very simple, very empirical, already-demonstrated CAUSE for that phenomenon.

The mainstream may not know it yet, but Big Bang theory was actually falsified by those plasma redshift observations in the lab. The mainstream doesn't have an empirical leg to stand on, and not even a good "explanation" for redshift anymore. Plasma redshift observations from the lab are the sayonara song of mainstream theory, IMO. It's just a matter of time....
 
http://vixra.org/pdf/1105.0010v1.pdf

Recent developments in laser induced plasma have shown that the characteristic recombination lines from atoms within the plasma itself are redshifted. Importantly, the experimental results show that the redshift of these lines increases with the free electron density of the plasma. Long predicted by exponents of alternative theories to the Big Bang, these intrinsic redshifts produced by plasma in the laboratory give credence to such theories.
 
icebear:
1, vixra links are not going to hold water. vixra is for stuff that you can't even get on the arxiv, and the arxiv isn't peer reviewed (although much on it is through separate processes).
2, tired light just doesn't work. The reasons why have been explained to death. I feel sorry for Ben M if he has to go through them all over again, as I recall he's done excellently before.
3, I just wanted to mention how tickled I was by this quote in the vixra article (which I read for giggles)
An estimated value of n_e in the IG space can be achieved from the WMAP data [12] and gives n_e = 2.2x10^-7 cm^-3 or an average of 0.22 electrons per metre cubed. Thus this New Tired Light theory gives a predicted value of H as 0.9x10^-18 s^-1 or 27 km/s per Mpc.
So he's using a reference to a paper that (if you chase it through) references further papers for the calculation, papers supporting standard big bang cosmology, giving a figure of 0.044 for the baryon fraction Ω_b assuming an H0 of 71 km/s/Mpc; and he uses those values to support a completely different cosmology with no dark matter and a completely different origin for the very photons whose observations are used to give that value, and uses this to support a value of H0 of... get this... 27 km/s/Mpc! It's... well... if I use the word 'impressive' please don't misinterpret me.
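To be fair, the unit conversions in that quote do check out; it's everything upstream of them that doesn't. A minimal sketch, using only the figures quoted above and 1 Mpc = 3.086x10^19 km:

```c
#include <stdio.h>

int main(void) {
    /* Ashmore's quoted figures, converted so they can be compared directly */
    double n_e_cm3    = 2.2e-7;    /* electron density, cm^-3, as quoted */
    double H_per_s    = 0.9e-18;   /* his predicted H, s^-1, as quoted   */
    double km_per_Mpc = 3.086e19;  /* 1 megaparsec in kilometres         */

    /* cm^-3 -> m^-3: one cubic metre holds 1e6 cubic centimetres */
    printf("n_e ~ %.2f electrons per cubic metre\n", n_e_cm3 * 1e6);
    /* s^-1 -> km/s/Mpc: multiply by the number of km in one Mpc;
       prints ~28, which he rounds to 27 */
    printf("H   ~ %.0f km/s per Mpc\n", H_per_s * km_per_Mpc);
    return 0;
}
```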
 
You had raised an objection to my advocacy of alternate formulations in physics to resolve anomalies (Q's were offered as a possible reformulation), based on the claim that alternate representations would produce "literally the same thing in different notation"; thus, on that view, recommending exploration of alternate formulations is unwarranted.

I agreed with your claim, but pointed out that while true, it missed the point I was advocating: the calculations we are likely or able to perform depend greatly on the notation we choose, as your examples illustrated.

You missed one detail: Calculations look different from others until you distill down to raw group theory, and then you're at the core. Binary arithmetic looks different than decimal arithmetic, but they're both representations of the group of integers under addition. Maybe it's possible that "thinking about rotations and reflections of a cube" inspires different calculations than "thinking about ways to traverse a tetrahedron", but "thinking about the finite group Sym(4)" is, inherently, doing both.

History shows we are simply unlikely to try or look for things that seem implausible under older formulations.

That does not mean that "Hey guys! Try new formulations!" is a productive piece of advice. Especially insofar as today's physicists have been specifically trained to seek new formulations all the time.

My question: Do we agree that if we had built decimal-logic computers, or had forced people to do computer-related arithmetic in binary, many results we currently consider trivial would not be around for us to put in alternate representations?

No, I don't agree. I mean, if we did arithmetic in binary, then elementary schools would not teach the "all multiples of nine have a digit sum which is also a multiple of nine" fact. (Although this sort of fact is known to mathematicians in---guess what?---general formulations that are true in all bases.) If we did arithmetic in binary, schoolkids would make different mistakes than they make now. If we forced computers to use decimal, no one would have invented the fast inverse square root, but they probably would have invented equally-useful versions with bitwise arithmetic.
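The all-bases version of that schoolkid fact, for the record: in base b, every number is congruent to its digit sum modulo b-1, which is why casting out nines works in decimal. And since I brought it up, here is a minimal sketch of the fast inverse square root, in its well-known Quake III form; the magic constant and the one-Newton-step structure are the standard ones, while the variable names and the test value are mine:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* The classic bit hack: reinterpret the float's IEEE-754 bits as an integer,
   halve and subtract from a magic constant (a crude log-space estimate of
   x^(-1/2)), then sharpen with one Newton-Raphson step. */
static float fast_inv_sqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);           /* read the bits without aliasing tricks */
    i = 0x5f3759df - (i >> 1);          /* the famous magic number */
    memcpy(&x, &i, sizeof x);
    return x * (1.5f - half * x * x);   /* one Newton iteration */
}

int main(void) {
    printf("fast: %f  exact: %f\n", fast_inv_sqrt(4.0f), 0.5);  /* ~0.4992 vs 0.5 */
    return 0;
}
```

The whole trick depends on the binary layout of IEEE-754 floats; there is no decimal analogue of treating the bit pattern as a cheap logarithm, which is exactly the point.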

If so, and if history is any guide, it suggests that reconfiguring the categories of a science (via alternate representations) is a key characteristic of the revolutionary advance which is generally accepted as needed in physics.

If there exist transformative advances in any STEM discipline which do not feature such recategorization, I would be interested to learn of them.

The problem with your management scheme: knowing you need a recategorization is easy. Getting the right one is hard.

Physicists already know that present-day theories probably are a limited version---a low-energy limit, or a subgroup, or a 4D projection, or a set of emergent statistical properties---of a theory we haven't seen. They spend all day attempting to think about current theories in different ways.



on closer review and based on information systems project management criteria, it seems the most promising reformulation in many years: the amplituhedron.

Hey, look! Something physicists discovered, using physicist methods and physicist motivations over the course of a decade, gradually converged on something that was widely recognized, by physicists, using physicists' own version of "project management", to be important/interesting.

And you have read a popular article about it---in which the journalist presumably interviews a physicist saying "this is a very important reformulation and may point to new truths about spacetime and will be pursued". And you look back and declare retrospectively that your IT-management-principles have labeled this as something to be pursued?

I've asked this question repeatedly: what do your management techniques do differently than what physicists do already? Because this discovery qualifies as "the sort of thing physicists already do". It looks like that IT-management-free method has discovered good things. Do you think your technique would have done even better? I'm more than a little skeptical of that.

In fact, I might hazard a guess that your management might have downgraded the priority of the study of Grassmannians, which (until the 2012-ish excitement) would have appeared, to you, to be a boring pursuit of "routine" science in the boring, inside-the-box, and non-revolutionary business of Yang-Mills theory. If, in 2004, you had been looking for possibilities for a paradigm shift in the study of spacetime, you would have looked away from "Coplanarity In Twistor Space Of N=4 Next-To-MHV One-Loop Amplitude Coefficients" by Britto, Cachazo, and Feng http://arxiv.org/abs/hep-th/0411107 . What if the IT-based analysis had diverted funding away from that and towards (picking from the 400+ hep-th uploads on that day's arxiv) "The Exact Geometry of a Kerr-Taub-NUT Solution of String Theory" or "Dark Entropy: Holographic Cosmic Acceleration" or "Chromogravity - An Effective Diff(4,R) Gauge for the IR region of QCD"?
 
...
...
Last week, a colleague at NASA forwarded me a new mathematical construct I initially poo-pooed, but on closer review and based on information systems project management criteria, it seems the most promising reformulation in many years: the amplituhedron.

The fact that you "initially poo-pooed" this is quite telling. What would be your basis for this -- as a non-mathematician, as a non-physicist, as a layman?
Are you suggesting that "project management" by non-physicists had anything to do with this discovery? If not, what is the point? If so, please provide some evidence for this claim.
 
Icebear, I have read the actual experimental article that your viXra link attempts to analyze. The experiments are reasonable, but the viXra interpretation is completely moronic. If you plug in the numbers, the effect that Chen observes disproves the use of Stark shifts to match the Arp model---by showing

(a) that the line shift is NOT simply proportional to wavelength (as with observed redshifts) but has some different value for every atom and every atomic line (a toy sketch after this list illustrates the difference),

(b) that the effect is tiny, whereas Arp's speculations need a huge effect, and

(c) that the effect is pressure and temperature dependent, while Arp's cosmology would have needed some "regional" effect that applies equally to (dense) stars, (near-vacuum) interstellar gas clouds, and (medium-density) quasars.

(d) that the plasma effect causes both line shifts and line broadening, proving that it can't be responsible for cosmological redshifts, where we see redshifts in narrow lines, whose absence of broadening shows that the electron density is low.
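
To make (a) concrete with a toy comparison (illustrative numbers, not Chen's data): a cosmological redshift multiplies every wavelength by the same factor (1+z), so the fractional shift comes out identical for every line, while an additive shift gives a different fractional shift for each line. Real Stark shifts are even worse for Ashmore, since they differ per atom and per transition:

```c
#include <stdio.h>

int main(void) {
    /* rest wavelengths of the first three hydrogen Balmer lines, in nm */
    double lambda0[] = {656.3, 486.1, 434.0};
    double z = 0.1;          /* a cosmological redshift */
    double shift_nm = 0.05;  /* an additive, line-independent offset (toy value) */

    for (int k = 0; k < 3; k++) {
        double z_cosmo = (lambda0[k] * (1.0 + z)) / lambda0[k] - 1.0;
        double z_add   = (lambda0[k] + shift_nm)  / lambda0[k] - 1.0;
        /* z_cosmo is 0.100 for every line; z_add differs line by line */
        printf("lambda0 = %.1f nm: cosmological z = %.3f, additive z = %.6f\n",
               lambda0[k], z_cosmo, z_add);
    }
    return 0;
}
```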

Not that this is surprising. Please note that Chen is merely the latest in many measurements of this effect---the key equation in this paper (for the expected size of the lineshift) is cited as coming from a 1976 textbook.

To defend the assertion that Ashmore's paper is moronic, note how he speculates that all of Chen's interpretations are wrong because Chen supposedly talks only about "electron density". If you read Chen's paper---and you read the text, not just the figure captions---you will see that he talks about BOTH electron and ion density, and both figure into his equations. Ashmore apparently barely read the paper he thinks he's using to demolish cosmology.

What did you read, Icebear?
 
Arp's claims confirmed in actual lab tests:
Sorry, icebear, but citing the cranks at thunderbolts is an automatic epic fail!

Intrinsic Plasma Redshifts Now Reproduced In The Laboratory – a Discussion in Terms of New Tired Light (PDF) is a crackpot paper.
Citing any of the vixra "pre-prints" (they are never actually published as far as I have seen) is another automatic fail, icebear.

Tired light theories do not work in the real universe
Errors in Tired Light Cosmology

Lyndon Ashmore's fantasy about the Investigation of the mechanism of spectral emission and redshifts of atomic line in laser-induced plasmas paper is just that.

ben m labels Lyndon Ashmore's "pre-print" as moronic since it does not fit the real world and shows Ashmore's inability to even understand the Chen et al. paper.
I would label Lyndon Ashmore's "pre-print" as moronic based just on its content:
  • "by ‘eye’ there is a linear relationship for all but one of the data points."
    Real science is not done by 'eye'.
  • That linear relationship leads to the ridiculous situation of a redshift for an electron density of zero, i.e. no plasma causing this "intrinsic plasma redshift".
  • "However, overall, plasma is electrically neutral" which is true for large enough scales.
    But Chen's explanation is about the ionized atoms in plasmas which are not "electrically neutral".
  • The intergalactic medium is currently measured to be about 10^24 times (1,000,000,000,000,000,000,000,000×) thinner than the plasma in Chen et al., so the laboratory effect is not expected in the real world (see the sketch after this list).
  • Cosmological redshift is measured from stars, not plasma excited by lasers!
  • The moronic act of "predicting" the CMB and not actually predicting it!
    There is no microwave radiation derived in section 5.
  • "Interestingly, the CMB has a black body form of radiation and it is known that plasma emit Black Body radiation as the clouds will be in thermal equilibrium" is just ignorant.
    Plasma does not emit Black Body radiation. The best you can get is nearly Black Body radiation from thick plasma such as in stellar photospheres.
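
On the density point above: a quick sketch using Ashmore's own quoted IGM figure and a typical laser-induced plasma density (the 10^17 cm^-3 scale is my assumed order of magnitude for this comparison):

```c
#include <stdio.h>

int main(void) {
    double n_igm = 2.2e-7;  /* IGM electron density, cm^-3 (Ashmore's own figure) */
    double n_lab = 1.0e17;  /* laser-induced plasma, cm^-3 (assumed typical scale) */

    /* ratio ~ 5e23, i.e. the ~10^24 quoted above */
    printf("lab plasma is ~%.0e times denser than the IGM\n", n_lab / n_igm);
    return 0;
}
```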
 
icebear:

3, I just wanted to mention how tickled I was by this quote in the vixra article (which I read for giggles)
An estimated value of n_e in the IG space can be achieved from the WMAP data [12] and gives n_e = 2.2x10^-7 cm^-3 or an average of 0.22 electrons per metre cubed. Thus this New Tired Light theory gives a predicted value of H as 0.9x10^-18 s^-1 or 27 km/s per Mpc.
So he's using a reference to a paper that (if you chase it through) references further papers for the calculation, papers supporting standard big bang cosmology, giving a figure of 0.044 for the baryon fraction Ω_b assuming an H0 of 71 km/s/Mpc; and he uses those values to support a completely different cosmology with no dark matter and a completely different origin for the very photons whose observations are used to give that value, and uses this to support a value of H0 of... get this... 27 km/s/Mpc! It's... well... if I use the word 'impressive' please don't misinterpret me.

That is wonderful, isn't it, in its ability to produce gales of laughter.

Ashmore has been around internet discussion forums for quite a while, it seems (so Google tells me), so it's not like he worked out all this nonsense in a vacuum. That he went ahead and wrote it up, knowing full well that it won't fly, speaks volumes, wouldn't you say, icebear?
 
Icebear, I have read the actual experimental article that your viXra link attempts to analyze. The experiments are reasonable, but the viXra interpretation is completely moronic. If you plug in the numbers, the effect that Chen observes disproves the use of Stark shifts to match the Arp model---by showing

(a) that the line shift is NOT simply proportional to wavelength (as with observed redshifts) but has some different value for every atom and every atomic line,

(b) that the effect is tiny, whereas Arp's speculations need a huge effect, and

(c) that the effect is pressure and temperature dependent, while Arp's cosmology would have needed some "regional" effect that applies equally to (dense) stars, (near-vacuum) interstellar gas clouds, and (medium-density) quasars.

(d) that the plasma effect causes both line shifts and line broadening, proving that it can't be responsible for cosmological redshifts, where we see redshifts in narrow lines, whose absence of broadening shows that the electron density is low.

Not that this is surprising. Please note that Chen is merely the latest in many measurements of this effect---the key equation in this paper (for the expected size of the lineshift) is cited as coming from a 1976 textbook.

To defend the assertion that Ashmore's paper is moronic, note how he speculates that all of Chen's interpretations are wrong because Chen supposedly talks only about "electron density". If you read Chen's paper---and you read the text, not just the figure captions---you will see that he talks about BOTH electron and ion density, and both figure into his equations. Ashmore apparently barely read the paper he thinks he's using to demolish cosmology.

What did you read, Icebear?
I'm going to hazard a guess: icebear won't answer. After all, s/he hasn't answered any questions lately, so why start now? Especially as answering sensibly requires icebear to read and understand both Ashmore's document and the Chen paper.

But, I could be wrong. Icebear, would you care to prove me wrong?
 
I couldn't help but think of another odd idea that Farsight has expressed.

That an "elementary" particle that decays is not truly elementary because of that. Thus, the likes of the W and Z are events and not really particles, because they are so evanescent.

But the proper comparison is to some intrinsic time, and a good intrinsic time is the Compton period, h/(mc^2). Since the decay width is related to the mean life by (width) = hbar/(mean life), and leaving out factors of 2 and pi,

(mean life)/(Compton period) ~ (mass)/(width)

Here are some mass-to-width values:
Muon: 3.5*10^(17)
Tau lepton: 7.8*10^(11)
Top quark: 86
(other quarks hadronize)
W particle: 39
Z particle: 37
Higgs particle: ~ 30,000

So these are all legitimate particles.
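
For anyone who wants to reproduce the table, a minimal sketch: the muon and tau widths come from Γ = ħ/τ, while the top, W, Z, and Higgs widths are quoted directly. The masses, mean lives, and widths below are rounded PDG-scale values that I'm assuming as inputs here:

```c
#include <stdio.h>

#define HBAR_GEV_S 6.582e-25  /* hbar in GeV*s */

/* convert a measured mean life into a width, then take mass/width */
static double mass_over_width(double mass_gev, double mean_life_s) {
    double width_gev = HBAR_GEV_S / mean_life_s;  /* Gamma = hbar / tau */
    return mass_gev / width_gev;
}

int main(void) {
    printf("muon:  %.1e\n", mass_over_width(0.10566, 2.197e-6));   /* ~3.5e17 */
    printf("tau:   %.1e\n", mass_over_width(1.77686, 2.903e-13));  /* ~7.8e11 */
    /* these widths are measured directly, so just divide mass by width */
    printf("top:   %.0f\n", 172.7 / 2.0);      /* ~86 */
    printf("W:     %.0f\n", 80.38 / 2.085);    /* ~39 */
    printf("Z:     %.0f\n", 91.19 / 2.495);    /* ~37 */
    printf("Higgs: %.0f\n", 125.1 / 0.0041);   /* ~30,000 */
    return 0;
}
```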
 
You missed one detail: Calculations look different from others until you distill down to raw group theory, and then you're at the core. Binary arithmetic looks different than decimal arithmetic, but they're both representations of the group of integers under addition. Maybe it's possible that "thinking about rotations and reflections of a cube" inspires different calculations than "thinking about ways to traverse a tetrahedron", but "thinking about the finite group Sym(4)" is, inherently, doing both.
I'm not sure why you think I missed this, since I've repeatedly agreed with your earlier support for this point, and we both agree with this latest example. Yes, that reformulations do the same inherent work is a valid point.

What I'm uncertain on is whether we also agree that there are other considerations apart from the inherent work, such as context, cost, scope, risk, etc. A familiar historical example may make this more clear:

Geocentric, heliocentric, and purely arithmetical calculations (as used by Mesoamericans) of a given sunrise each do inherently the same work and obtain identical results. Yet the availability and use of a particular method is generally regarded as significant. Do we agree this general regard is valid?


That does not mean that "Hey guys! Try new formulations!" is a productive piece of advice. Especially insofar as today's physicists have been specifically trained to seek new formulations all the time.
I agree 100% with both claims. The only part I dispute is that the training is consistent with the best work of the last 20 years in history and philosophy of scientific revolutions.

(Although this sort of fact is known to mathematicians in---guess what?---general formulations that are true in all bases.) If we did arithmetic in binary, schoolkids would make different mistakes than they make now. If we forced computers to use decimal, no one would have invented the fast inverse square root, but they probably would have invented equally-useful versions with bitwise arithmetic.
Yes, 100% agreement.

The problem with your management scheme: knowing you need a recategorization is easy. Getting the right one is hard.
Actually, there exists widespread ignorance and misconception about paradigm change that is non-trivial to overcome, but your point is well taken: that's easy compared to getting the right reformulation of our models.

Physicists already know that present-day theories probably are a limited version---a low-energy limit, or a subgroup, or a 4D projection, or a set of emergent statistical properties---of a theory we haven't seen. They spend all day attempting to think about current theories in different ways.

This general "our theory is limited in some ways" has always been somewhat true within modern science, hasn't it? As for specifics, I argue that an obvious risk is the way fundamental dimensions are defined today.

Absent evidence, I refuse to believe that 10,000 years ago the Egyptian followers of Anubis and Isis got fundamental dimensions of cosmological reality correct on their first guess without even trying.

That argument is presented here: http://www.youtube.com/watch?v=tuHmUrpd9Ww

...you look back and declare retrospectively that your IT-management-principles have labeled this as something to be pursued?
Information systems (my field) often includes IT, but otherwise, yes. Your comment suggests a problem arises if administration analyzes newly developed techniques from a management science perspective.

I've asked this question repeatedly: what do your management techniques do differently than what physicists do already?
And I've answered repeatedly: management techniques don't "do" physics; they alter how work is done. Application of management science alters how, when, and at what quality work is done.

Just like geocentrism vs. heliocentrism, cosmological modeling with or without any technique can be validly said to be "doing the same inherent work", depending on the perspective.

Because this discovery qualifies as "the sort of thing physicists already do". It looks like that IT-management-free method has discovered good things.
Good? They're the best in the history of the world!!

On the other hand, does the effort, cost, and quality of our efforts to resolve problems which have continued for more than 100 years suggest nothing is amiss? Businesses would NEVER tolerate such ineffectual investments.

Do you think your technique would have done even better? I'm more than a little skeptical of that.
If by "better", we are allowed to include things outside the knowledge itself, such as cost, duration, and quality of results, then my answer would be yes.

I agree with the many studies, one even cited by critics (Standish's Chaos Report), that application of well-proven management practices improves performance. What sort of evidence would you find compelling?

This seems like the same value good navigation provides a ship seeking its next port, even though lots of ships without navigation going out for many years would eventually find that same port. From the perspective of "did the ship reach the port?" both results are inherently identical with the arriving ship being just as much "in this port" either way.

Maybe the video will clarify; the presentation was fairly well received despite my poor delivery, for which I apologize to any viewers.
 
I recently read Jim Baggott’s Farewell to Reality. First, the book provides an overview of the state of knowledge of modern physics (more than half the book). Then, the author goes into a rant against what he calls “fairy tale physics,” which seems to include any area of speculation in physics that does not currently have any experimental or observational support, including disparate stuff like string theory, M-theory, supersymmetry, many worlds, multiverses, etc.
It’s hard to understand the motivation behind the book. He is unhappy that some physicists write popular books about their speculations and make money. He admits that he lacks the mathematical training to understand the theories he criticizes, but nevertheless condemns those that pursue these speculative areas that are beyond his reach. Is there anything to this book other than being a crackpot rant?
 

Sorry, I have a borderline-irrational aversion to arguments presented on YouTube. Nothing personal.

And I've answered repeatedly: management techniques don't "do" physics; they alter how work is done. Application of management science alters how, when, and at what quality work is done.

So at some point, a physicist has to do some work. And they have to do it differently (because of your management) than they would have done normally (i.e., with the existing DOE/NSF management). And you, the manager, have to come up with instructions for what to do, and those instructions have to be concrete and intelligible, and the outcome of following your instructions has to be good.

This is the point where we disagree. Insofar as you've offered examples of your management's actual instructions to physicists, those instructions sound like they're going to fail.

You picked up a random history-of-science/cog-sci concept. You guessed that this concept can be spotted among current research threads. You guessed that a manager could opt to prioritize "research that replaces objects with processes" and that this is likely to lead to revolutions. I think every bit of this is wrong. I don't think you, or anyone else, knows how to spot "object/process" differences except using 20/20 hindsight. I don't think that whatever the next physics breakthrough turns out to be will be recognizable ahead of time.

On the other hand, does the effort, cost, and quality of our efforts to resolve problems which have continued for more than 100 years suggest nothing is amiss? Businesses would NEVER tolerate such ineffectual investments.

You have a very bizarre view of the history of science. What "problems have continued for more than 100 years"? I really don't know. If I look back at what we knew about physics 100 years ago---virtually nothing---I see a long series of questions answered, prompting new questions, which were answered, and so on. Of course the remaining questions are difficult ones, but that's how science works.

What 100-year-old question are you thinking of? The lack of FTL travel? (That may just be an ironclad law of the Universe---and the discovery and testing of this law is a success, not a failure.) The lack of a confirmed quantum-gravity theory? (Did you realize that the LHC recently successfully tested a number of important quantum-gravity hypotheses? Should we not have done that? It was a small part of an expensive multipurpose project, which also tested hypotheses like the Higgs mechanism, which turned out to be true---should we not have done that?)
 
I recently read Jim Baggott’s Farewell to Reality. First, the book provides an overview of the state of knowledge of modern physics (more than half the book). Then, the author goes into a rant against what he calls “fairy tale physics,” which seems to include any area of speculation in physics that does not currently have any experimental or observational support, including disparate stuff like string theory, M-theory, supersymmetry, many worlds, multiverses, etc.
What would he have considered "fairy-tale physics" in the past? In the 19th century, would he have slammed atomism as "fairy-tale physics"?

It’s hard to understand the motivation behind the book. He is unhappy that some physicists write popular books about their speculations and make money. He admits that he lacks the mathematical training to understand the theories he criticizes, but nevertheless condemns those that pursue these speculative areas that are beyond his reach. Is there anything to this book other than being a crackpot rant?
More like unjustified kvetching to me. Something which Alexander Unzicker and Farsight also do, but JB and AU don't have alternative theories the way that Farsight does.
 
Looking further back, in early modern times, a common dirty word for farfetched theories was "occult qualities". Sir Isaac Newton's supporters defended the idea of a force of gravity from Gottfried Leibniz's charge that gravity was an "occult quality" (Newton's Philosophy (Stanford Encyclopedia of Philosophy)). Newton himself famously wrote that he would make no hypotheses (Hypotheses non fingo) about gravity's nature.

So Newton's critics were calling gravity fairy-tale physics.
 
Looking further back, in early modern times, a common dirty word for farfetched theories was "occult qualities". Sir Isaac Newton's supporters defended the idea of a force of gravity from Gottfried Leibniz's charge that gravity was an "occult quality" (Newton's Philosophy (Stanford Encyclopedia of Philosophy)). Newton himself famously wrote that he would make no hypotheses (Hypotheses non fingo) about gravity's nature.

So Newton's critics were calling gravity fairy-tale physics.

http://en.wikipedia.org/wiki/Hypotheses_non_fingo

So what? Sure, some were proposing that gravity had an "occult quality" (due to its mysterious nature at the time), and what Newton was writing about (and against), in the quote you took one little part of, was such baseless speculation and how he would not engage in it, preferring instead a particular "experimental philosophy" where "propositions are inferred from the phenomena, and afterwards rendered general by induction." They were simply saying that their own ideas about gravity, by the "occult quality" they ascribed to it, were "fairy-tale physics". Crackpot physics has been around a lot longer than there has been actual experimental physics. The apparent drive for such remains much the same.
 
...
...

More like unjustified kvetching to me. Something which Alexander Unzicker and Farsight also do, but JB and AU don't have alternative theories the way that Farsight does.

Concerning terminology: I prefer to think of Duffield's assertions as "alternate (crackpot) notions." I would reserve the term "alternate theory" for some conjecture that has a mathematical basis and some possibility for experimental confirmation.
 
On the other hand, does the effort, cost, and quality of our efforts to resolve problems which have continued for more than 100 years suggest nothing is amiss? Businesses would NEVER tolerate such ineffectual investments.
What problems? The apparent incompatibility of QM and GR -- grand unification? Something else? The ultimate nature of time, space, matter, etc. has occupied mankind for thousands of years. Nothing is "amiss." It seems all we can do is learn and understand the behavior of the knowable universe, but not its ultimate nature. Our models of reality have become increasingly complex and deep but reveal little about its ultimate nature. Perhaps our descendants will unravel some of these mysteries in the future -- but probably not.
In any case, it is extremely naïve to believe better management techniques would make any difference. I would concede that perhaps an accomplished theoretical physicist with concurrent management-technique training might avoid some dead ends and save some time, but I find even that unlikely.
 
