• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged: Puzzling results from CERN

I have not had any response to my post above, and I admit it is a hunch based on limited knowledge of Einstein, but my question remains.

The New Scientist article states that "The researchers also accounted for an odd feature of general relativity in which clocks at different heights keep different times" (http://www.newscientist.com/article/dn20961-fasterthanlight-neutrino-claim-bolstered.html). This deals with gravitational time dilation.

Was the same applied to time dilation based on the differing linear speeds of the two points, i.e. Gran Sasso and CERN (a difference of some 21.7 m/s)? See http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/tdil.html.

And if not included, could this not have contributed to these surprising results?

Anders - no offense but please do not respond.
Even I'm aware of gravity's effect on time. I'm sure the professionals took this into account.

Is there something that makes you think otherwise? I'm sorry, but even if they did forget about this, I can't tell you what effect it would have on their results.
 
Even I'm aware of gravity's effect on time. I'm sure the professionals took this into account.

Is there something that makes you think otherwise? I'm sorry, but even if they did forget about this, I can't tell you what effect it would have on their results.

Gravity, yes, but the difference in speed is what I am asking about, i.e. the difference in linear velocity of the two points, and the effect it may have on time dilation.
 
Gravity, yes, but the difference in speed is what I am asking about, i.e. the difference in linear velocity of the two points, and the effect it may have on time dilation.
Until the actual time measurement error is found, the only other rational explanation is that the detector at the far end was washed with a homeopathic solution of thiotimoline.
 
You can read the actual paper that is inviting scrutiny here in case it hasn't been posted yet: http://arxiv.org/abs/1109.4897

Not sure exactly why people are up in arms about it. It was only a matter of time (pun intended), given the history of science. There's always going to be another level in Alice's rabbit hole. The arrogance of the modern scientific mind, though, is that it habitually believes it belongs to the generation that hits the bottom floor of any subject.

Would suggest reading Feyerabend's "Against Method" for just a tiny bit of insight into modern scientific hubris.
 
OK, the researchers' video presentation at CERN is now online:

https://mediastream.cern.ch/MediaArchive/Video/Public/WebLectures/2011/155620/155620-podcast.mp4

In my opinion, the research team shows a great scientific attitude: they understand that the stakes are very high, they have done all they can and have not found mistakes, and now they are turning to the rest of the scientific community to find the potential flaws in their results. Please take a good look at this presentation if you are interested in the issue.

The Q&A starts at about 1:03:45.
 
Gravity, yes, but the difference in speed is what I am asking about, i.e. the difference in linear velocity of the two points, and the effect it may have on time dilation.

That's well known, and it's the basis of how the GPS satellites work.

But here's the kicker. The distances measured for the experiments are based on GPS measurements in the first place. So if the conclusion is that the neutrinos are actually going faster than c, it would mean that there is something wrong with relativity, if only in the measurement of the speed of light. Either way, it would mean that the original measurements used to form the conclusion are wrong anyway.
 
Because both reference points experience the same amount of error in time?

I see what you are saying. I would like someone to explain to me, without ridicule or referrals to homeopathy :), why my following question is wrong.

At CERN's latitude, the circumference of the Earth is approximately 27,720 km; at Gran Sasso's, it is about 29,590 km.

This means a difference of 1,870 km, which, given the Earth's 24-hour rotation, works out to a linear velocity difference of about 21 m/s.

Since we are talking about light-speed experiments, does it not follow that, at the time of the 'firing', both points, i.e. Gran Sasso and CERN, can be treated as moving linearly rather than angularly?

If yes, can the Lorentz time dilation equation apply to this difference in velocity? And if it can, should it be considered in the calibration of the equipment?
And if it should, is it not valid to ask whether this was done?

That is all that I am asking.
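
For scale, here is a minimal back-of-the-envelope sketch (Python) that simply plugs the quoted 21.7 m/s into the Lorentz time dilation formula from the hyperphysics page linked earlier. The ~730 km CERN-Gran Sasso baseline used for the flight time is my own assumption, not a figure from this thread, and this is not a full treatment of clocks fixed to a rotating Earth:

[code]
c = 299_792_458.0   # speed of light, m/s
dv = 21.7           # quoted linear speed difference between the two sites, m/s

# For v << c, gamma - 1 is very nearly v^2 / (2 c^2)
gamma_minus_1 = dv**2 / (2 * c**2)
print(f"gamma - 1           : {gamma_minus_1:.2e}")                  # ~2.6e-15

# Clock offset accumulated over one day at that fractional rate
print(f"offset over one day : {gamma_minus_1 * 86_400:.2e} s")       # ~2.3e-10 s

# Offset over a single flight, ASSUMING a ~730 km baseline (not given here)
flight_time = 730e3 / c                                              # ~2.4e-3 s
print(f"offset per flight   : {gamma_minus_1 * flight_time:.2e} s")  # ~6e-18 s
[/code]

This only shows the order of magnitude implied by the 21.7 m/s figure itself; whether that is the right quantity to plug in is exactly the question being asked.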
 
You can read the actual paper that is inviting scrutiny here in case it hasn't been posted yet: http://arxiv.org/abs/1109.4897

Not sure exactly why people are up in arms about it. It was only a matter of time (pun intended), given the history of science. There's always going to be another level in Alice's rabbit hole. The arrogance of the modern scientific mind, though, is that it habitually believes it belongs to the generation that hits the bottom floor of any subject.

Would suggest reading Feyerabend's "Against Method" for just a tiny bit of insight into modern scientific hubris.

Would you like to make a bet on whether neutrinos actually travel faster than light? I'll bet against that at almost any amount.
 
Sol, I have been surprised to see you taking what looks like "this can't be true" as a position.

It might, or might not.

I'm not saying it can't be true. I'm saying it's not true, and that I'll put my money where my mouth is and take bets to the contrary. So far, no takers.

This is something that would overturn a good part of 20th century physics, including some of the best-tested theories in the history of human thought. It's extremely unlikely to be right a priori - and the more I look at it, the more it smells.

They didn't need to go so public with this - they could have just gone around and given some low-key talks, collected feedback, digested it, and then decided what to do. Personally, I think that would have been the right choice.
 
I'm not saying it can't be true. I'm saying it's not true, and that I'll put my money where my mouth is and take bets to the contrary. So far, no takers.

This is something that would overturn a good part of 20th century physics, including some of the best-tested theories in the history of human thought. It's extremely unlikely to be right a priori - and the more I look at it, the more it smells.
They didn't need to go so public with this - they could have just gone around and given some low-key talks, collected feedback, digested it, and then decided what to do. Personally, I think that would have been the right choice.

I'm with Sol on this one. The more I look at it, especially considering the new criticism of the CERN-OPERA group's statistical analyses, the more I think that, sadly, we could be looking at another cold-fusion fiasco.
 
I don't know if anyone has posted this yet, but here is an interesting critique (by John Costella, a particle physicist in Melbourne) of the statistical analysis performed by the CERN-OPERA team, which seems to cast considerable doubt on their FTL claims.

ETA2: It seems that Meridian and others may have already noted this issue.

Actually, this particular critique is total nonsense. It assumes that if you are fitting the pulses against each other, the best (lowest statistical error) way to do it is to match up the means. That's true for a normal distribution, but not for other distributions.

(This is easy to see: imagine an idealized example with no other sources of error, where the emitted pulse has exactly a square-wave shape and there is no background, etc. Then you know that the first neutrino received came from somewhere within the pulse, and so did the last one. With n samples, the first and last will be roughly (pulse length)/n away from the actual ends of the pulse, so you'll get the offset pinned down to within this error. This is much better than the statistical error in the difference of the means, which only shrinks like 1/sqrt(n).)
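
To make that square-wave argument concrete, here is a toy Monte Carlo sketch. It is my own construction, borrowing only the 10,500 ns width and the 16,111 events quoted from the critique below; it has nothing to do with OPERA's real waveforms, backgrounds or timing chain:

[code]
import numpy as np

rng = np.random.default_rng(0)

def toy_offset_errors(n, pulse_len=10_500.0, true_offset=1_000.0):
    """One toy 'run': n arrival times drawn from an offset square pulse.

    Returns the error of two offset estimates: one matching the means,
    one using the pulse edges (earliest and latest arrivals)."""
    arrivals = true_offset + rng.uniform(0.0, pulse_len, size=n)
    mean_based = arrivals.mean() - pulse_len / 2.0
    edge_based = 0.5 * (arrivals.min() + arrivals.max() - pulse_len)
    return mean_based - true_offset, edge_based - true_offset

for n in (1_000, 16_111):
    errs = np.array([toy_offset_errors(n) for _ in range(2_000)])
    print(f"n = {n:6d}: std(mean-based) = {errs[:, 0].std():6.2f} ns, "
          f"std(edge-based) = {errs[:, 1].std():5.2f} ns")
[/code]

In this toy, the mean-based scatter shrinks like 1/sqrt(n) (about 24 ns at n = 16,111, matching the critique's figure), while the edge-based scatter shrinks roughly like 1/n - the gap the parenthetical above is pointing at.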
 
Actually, this particular critique is total nonsense. It assumes that if you are fitting the pulses against each other, the best (lowest statistical error) way to do it is to match up the means. That's true for a normal distribution, but not for other distributions.

But the author of the critique stated that the mean in this analysis could be treated as normally distributed...
Now, each of the individual neutrinos that happen to be detected will have a delay (from the start of each burst) that follows this distribution. The shift in the travel time can be estimated by simply computing the mean travel time. The statistical distribution of the sum of the 16,111 arrival times could be obtained by convolving the distribution with itself 16,111 times, but we know both intuitively and by the Central Limit Theorem that it will approach a Normal distribution, with a mean and variance that are each 16,111 times the mean and variance of the above distribution; dividing by 16,111 to compute the average, the variance of the result will be 1/16,111 of the variance of the distribution. Now, the variance of a uniform distribution of width 10,500 nanoseconds is (10,500)²/12 = 9,187,500, so the variance of the calculated mean will be about 9,187,500/16,111 ≈ 570, and so the standard deviation of the mean will be about √570 ≈ 24 nanoseconds.

Is there something I'm missing here? I am not a particle physicist so I'm not familiar with the methods they use to analyze their data sets. If this critique is off, is there a better one on the statistics issue?
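
For what it's worth, the arithmetic in the quoted passage does check out; here is a quick sketch confirming it, using only the numbers already quoted:

[code]
import math

width_ns = 10_500.0          # assumed uniform pulse width, ns
n_events = 16_111            # number of detected neutrinos

var_uniform = width_ns**2 / 12        # 9,187,500 ns^2
var_of_mean = var_uniform / n_events  # ~570 ns^2
print(math.sqrt(var_of_mean))         # ~23.9 ns standard deviation of the mean
[/code]

So if the critique is off, it is not in this calculation but in whether the mean is the right statistic to use in the first place, which is where the reply below takes it.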
 
The mean arrival time will approach a normal distribution, but that's not the point. The point is that you have n samples from a known (in this simplified approximation) distribution offset by an unknown random amount. You _could_ take the mean of the samples, which will be very close to a normal distribution, and use that to estimate the offset. But you don't _have_ to - there may be a better way using all the data.

And there is: suppose the distribution is uniform on [0,1] and zero outside, and you have a thousand samples from the offset distribution, with the smallest being 0.234 and the largest 1.233. Then you _know_ that the offset is between 0.233 and 0.234 (since every sample has to fit inside the shifted distribution: the largest forces the offset to be at least 0.233, and the smallest forces it to be at most 0.234). It's easy to check that n samples from a uniform distribution will typically cover all but order 1/n of the range, so these numbers are realistic for this toy question.
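
To put a number on "order 1/n" (my own addition - these are standard order-statistics facts for the uniform distribution, not anything computed in this thread): for n samples uniform on [\delta, \delta + 1],

E[\min] = \delta + \frac{1}{n+1}, \qquad E[\max] = \delta + \frac{n}{n+1},

so the bracket [\max - 1, \min] that must contain the offset \delta has expected width

1 - E[\max - \min] = 1 - \frac{n-1}{n+1} = \frac{2}{n+1} \approx 0.002 \quad (n = 1000),

versus a mean-based standard error of 1/\sqrt{12n} \approx 0.009 - the same sqrt(n)-versus-n gap.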

ETA: I've not seen a good critique on the statistics; I suspect that within a certain statistical model they are doing the calculations correctly (or roughly so - some things are odd: in their Monte Carlo bit, there's no need to fit a curve; just perform 100,000 runs or whatever). But the real statistical question is how good the model is, as I've tried to say before. If you find some discussion of this, please let me know where!
 