• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged: Puzzling results from CERN

That doesn't show many questions about possible problems with the statistical fitting (perhaps some near 17:41); that definitely seems to be the weak point!
 
Actually, thinking a bit more about the point Sol raised: it seems to me that there's a potential problem purely from the statistics, even without some change in the effectiveness of neutrino production during each pulse.

As I understand it, what's going on is roughly as follows: neutrinos are generated in 10,000 ns long pulses; since the detection rate is so low, I assume there are LOTS of these pulses, with usually no neutrinos actually detected, but occasionally 1. Since, when a neutrino is detected, there is no way of knowing when during its 10,000 ns pulse this particular neutrino was generated, the idea is to pool data from all the pulses. It's apparently clear enough which pulse a particular neutrino came from, so having set up the synchronized clocks etc., and assuming that all works, you plot the arrival time, relative to the start of its pulse (or rather, to some definite event associated to the pulse), for each neutrino, getting an overall `arrival curve'. Then you work out the `generated curve' of the overall distribution of times when neutrinos are generated.

Assuming that the neutrinos all travel at the same speed, the arrival curve will just be a time-shifted form of the generated curve, with statistical noise (since not so many neutrinos arrive). Knowing the generated curve, you can then find the best fit for the time shift, and use statistics to work out the uncertainty.

This seems extremely optimistic to me, given that there are surely lots of potential problems when the pulse is 10,000 ns long and you are looking at a 60 ns result.
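To make that concrete, here's a toy version of the fit in Python - my own sketch, with an invented flat-topped pulse shape, invented 50 ns edges, and made-up numbers throughout, purely to illustrate the method as I understand it:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented 'generated curve': a flat-topped 10,000 ns pulse with
    # 50 ns linear edges, normalised into a pdf on a 1 ns grid.
    t = np.arange(10000.0)                                  # ns
    shape = np.minimum(t / 50.0, 1.0) * np.minimum((10000.0 - t) / 50.0, 1.0)
    pdf = shape / shape.sum()

    true_shift = 60.0                                       # ns, to be recovered

    # Simulate the detections: emission times drawn from the pdf, plus the
    # unknown shift and a little timing jitter.
    emit = rng.choice(t, 16000, p=pdf)
    arrive = emit + true_shift + rng.normal(0.0, 5.0, 16000)

    # Maximum-likelihood fit: slide the generated curve under the arrival
    # times and keep the shift with the largest summed log-likelihood.
    def nll(shift):
        idx = np.clip((arrive - shift).astype(int), 0, len(t) - 1)
        return -np.sum(np.log(pdf[idx] + 1e-12))

    shifts = np.arange(0.0, 200.0)                          # candidate shifts, ns
    best = shifts[np.argmin([nll(s) for s in shifts])]
    print(f"best-fit shift: {best:.0f} ns (true value: 60)")
    # recovers ~60 ns, up to the toy's own statistical error

Note that with a flat-topped pulse it's really only the events landing on the rising and falling edges that pin the shift down; the flat middle contributes the same likelihood whatever the offset.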

Firstly, the curves don't seem to be measured in the same way (since there is no neutrino detector at CERN) - they use the distribution of protons (?) going into the thing that makes neutrinos as a proxy. As Sol points out, it could be that the curve of generated neutrinos is slightly different from the proton curve for some reason, in which case they'll be fitting two different curves, and will get the wrong time offset.

But I think there could be a problem even if the proton and generated neutrino curves are identical: it seems that the curve varies a bit from pulse to pulse (they say it changes over the course of the experiment). It seems to me that one should compare the overall distribution for generated neutrinos against the distribution for detected ones, but as I understand it, they don't do this: they only take the (proton) curve at the CERN end for those pulses where a neutrino is detected. If the pulses vary in total number of neutrinos, this is a `size biased' sample (you are more likely to include a large pulse, since it is more likely to lead to a detection). If the larger pulses have on average a different shape from the smaller ones, the average they seem to be taking will have a different shape from the true one, and so again the fitting will be matching up curves of different shapes and so will give the wrong offset.
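Here's a toy illustration of that (Python again, everything invented): let the `big' pulses be front-loaded and the `small' ones back-loaded, and build the template by summing the raw proton waveforms of just those pulses that yielded a detection. The big pulses then get counted twice over - once through the detection probability, and once through their larger waveforms:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100000                                   # number of pulses

    # Invented toy: half the pulses are 'big' (3x the neutrino yield) and
    # front-loaded; summarise each pulse's shape by its mean emission time.
    big = rng.random(n) < 0.5
    N = np.where(big, 3.0, 1.0)                  # relative yield per pulse
    mean_t = np.where(big, 4950.0, 5050.0)       # ns

    # The true generated curve counts every neutrino once, i.e. shapes
    # averaged with weight N.
    true_mean = np.average(mean_t, weights=N)

    # Detection probability proportional to yield (detections are rare).
    det = rng.random(n) < 0.001 * N

    # Summing raw waveforms of detected pulses weights each shape by N twice.
    biased_mean = np.average(mean_t[det], weights=N[det])

    print(f"true mean emission time:   {true_mean:.1f} ns")    # ~4975
    print(f"size-biased template mean: {biased_mean:.1f} ns")  # ~4960

A 15 ns distortion in a made-up toy, of course; the point is just that the sign and size of the effect depend entirely on how pulse shape correlates with pulse size, which is exactly the sort of thing that needs to be quantified.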

Overall, it seems that (as so often) the problem is in the statistics, or rather the assumptions going into it: if you know you have two curves one of which is precisely a randomly sampled time-offset version of the other, then yes you can use statistics to work out the statistical error in the best-fit offset. But the question is whether the curves are really related like this. Given that they are measured in different ways, what possible changes in the shape of the curve does this introduce, how big might these be, and how much would this affect the estimated offset? Not that one can see that the result is wrong, but all this discussion seems to be missing from the paper, so (from my superficial reading) the paper as it is doesn't give much reason to think the result should be right.

[I think you can correct for this particular size-biasing effect, while still using only pulses corresponding to detected neutrinos, by `undoing' the size-biasing. I'm not saying they haven't done this, or haven't simply checked that it's not important anyway; but there is no evidence in the paper itself that they have.]
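In the toy above, the `undoing' amounts to normalising each detected pulse's waveform by its own total yield before averaging (equivalently, weighting the raw waveforms by 1/N): detection already supplies one factor of N, so the plain average of the normalised shapes lands back on the true curve:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100000
    big = rng.random(n) < 0.5
    N = np.where(big, 3.0, 1.0)                  # relative yield per pulse
    mean_t = np.where(big, 4950.0, 5050.0)       # ns
    det = rng.random(n) < 0.001 * N              # detection prob. ~ yield

    # Average of the *normalised* shapes over detected pulses: the 1/N
    # normalisation cancels the N from the detection probability.
    print(f"corrected template mean: {mean_t[det].mean():.1f} ns")  # ~4975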

Hopefully this isn't all nonsense - it's late here!
 
[I think you can correct for this particular size-biasing effect, while still using only pulses corresponding to detected neutrinos, by `undoing' the size-biasing. I'm not saying they haven't done this, or haven't simply checked that it's not important anyway; but there is no evidence in the paper itself that they have.]
During the press conference they mentioned that they've checked all obvious possible causes.

What if the operation of the LHC created unintended mini wormholes that the neutrinos go through at c, but the distance traveled is shorter? I wonder when the sci-fi guys will enter the hoopla.

(During similar experiments in Japan, some neutrinos actually changed their flavor during the trip. Peculiar little suckers, aren't they?)
 
A lot of people are focusing on possible errors in the travel distance, but there's another distance that's just as important - cables. Electrical signals take time to travel, and in experiments where such short times are being measured, cable lengths are absolutely critical. Obviously I'm not suggesting they haven't thought of this, since presumably they know what they're doing, but it's a very easy area for errors to creep in. I know we've had issues before with timing not matching up properly because one cable out of thousands wasn't quite the right length.

So, they should be using super duper gold plated monster magic audio cables?
 
Of course your main point stands, but isn't this bit wrong? Say v1=(c,0,0), v2=(-c,0,0) in some frame. In a frame whose relative velocity is in the y direction, the photons' velocities have an equal (and non-zero) y-component. So the photons aren't moving in opposite directions, and the magnitude of their relative velocity is less than 2c. In other words, `moving directly towards each other' is not Lorentz invariant.

(The only invariant relative velocity is 0, of course.)
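A quick numerical check, for anyone who wants one (Python, units with c = 1, using the standard velocity-transformation formulas for a boost along y):

    import math

    c = 1.0
    u = 0.6 * c                          # boost speed along y
    g = 1.0 / math.sqrt(1.0 - u * u / c**2)

    def boost_y(vx, vy):
        # velocity as seen from a frame moving at +u along y
        d = 1.0 - u * vy / c**2
        return vx / (g * d), (vy - u) / d

    p1 = boost_y(+c, 0.0)                # photon with v1 = (c, 0)
    p2 = boost_y(-c, 0.0)                # photon with v2 = (-c, 0)

    for v in (p1, p2):
        print(v, "speed:", math.hypot(*v))          # both still exactly c

    print("closing speed:", math.hypot(p1[0] - p2[0], p1[1] - p2[1]))
    # (0.8, -0.6) and (-0.8, -0.6): equal y-components, and the closing
    # speed is 2c/gamma = 1.6c < 2c, just as described above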

Yes, you're quite correct. My mistake.
 
Actually, thinking a bit more about the point Sol raised: it seems to me that there's a potential problem purely from the statistics, even without some change in the effectiveness of neutrino production during each pulse.

That's a good point. Without totally changing the design of the experiment, I can think of two ways to deal with that (and the issue I raised). The simplest is to use a proton pulse a few nanoseconds long rather than a 10,000 ns one. The other is to change the baseline distance somehow and show that the delay is linear in it.

It also might help to measure the proton pulse more directly, for instance by detecting some of the pions.
 
I believe that light travels about 0.3 meters in 1 billionth of a second.

That leaves a lot of room for artifact after going through 730 kilometers of rock, and buried alien artifacts. :)

How big was the detection chamber?

Were any neutrinos flagged for getting loose before the clock started?

Where's a cop with a southern drawl when you need one?
 
How did they determine the distance between the two points? Since they are not in line of sight, they would have to make a lot of assumptions about where they really are relative to each other.

A neutrino is either a very fast particle or a photon.

What would you race it against? It is one of the few human-generated particles that can move through that much rock.

The slightest error in position or timing would be difficult to cross-check with another photon.

So they determined the path length somehow, and then by a separate path determined the start signal.

GPS is not good enough to set transport coordinates.
 
I'm sorry if this has been mentioned, as I haven't read every single post so far!
From the point of view of someone who's not a high-powered scientist, I thought of two possibilities - I apologise if the answer is obvious to you experts!
(1) Wouldn't the neutrino's trajectory be altered by the earth's rotation, and other rotations?
(2) Hasn't it been theorised that, due to the creation of space, the distant galaxies might be travelling at greater than c?
Perhaps this is an effect which becomes more noticeable as the distance over which the velocity is measured increases(?)
 
Well... regarding the neutrino results, it's going to require a re-design of the experiment to convince me they can measure the time accurately. Using 10,000 nanosecond pulses to measure a 60 nanosecond time shift is a bad idea for all sorts of reasons, several of which have been identified in this thread.

As for Anders, I think he's just trolling.
 
I was thinking of posting more or less the same thing. There are just too many potential problems with the statistics, it seems to me. (For example, even suppose you had another similar detector much closer, and compared results from the two. I presume the neutrino beam is not perfectly collimated: it could be that one detector sees more of it than the other, and it could be that the direction of emission and time within the pulse are correlated.)

As a friend pointed out, if you could modulate the beam well enough, very little data would be needed to get a much better result. Suppose (using some blocking/switching device) you could break the 10,000 ns pulse into 100 pulses of 10 ns each, with 90 ns gaps between (discarding 90% of the protons/neutrinos). If the background really is as low as they say, you'd only need a few (maybe just 2) neutrino detections to see clearly whether the signal is as expected, or ahead by 60 ns (plus or minus an unknown multiple of 100 ns, but who cares). You'd still get 10% of the data rate, so 500 detections a year, i.e., just a few days' data could be enough.
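To see how little data that would need, here's a toy in Python (invented numbers, background assumed negligible): folding each arrival time modulo the 100 ns period puts them all into a single 10 ns window, and the position of that window is the shift, mod 100 ns:

    import numpy as np

    rng = np.random.default_rng(2)

    period, width = 100.0, 10.0                 # ns: sub-pulse spacing, length
    pulse_starts = np.arange(0.0, 10000.0, period)
    true_shift = 60.0                           # ns

    # A handful of detections: each neutrino is emitted uniformly within
    # some 10 ns sub-pulse and arrives true_shift later.
    n = 5
    arrive = rng.choice(pulse_starts, n) + rng.uniform(0.0, width, n) + true_shift

    folded = arrive % period
    print("folded arrival times:", np.round(folded, 1))
    # all land in [60, 70): the ~60 ns shift, mod 100 ns, is visible
    # from just a handful of events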

But I imagine this is not so easy to set up; and I think the present paper (unless I'm missing something) is so weak that it doesn't provide a reason to try.
 
As a friend pointed out, if you could modulate the beam well enough, very little data would be needed to get a much better result. Suppose (using some blocking/switching device) you could break the 10,000 ns pulse into 100 pulses of 10 ns each, with 90 ns gaps between (discarding 90% of the protons/neutrinos). If the background really is as low as they say, you'd only need a few (maybe just 2) neutrino detections to see clearly whether the signal is as expected, or ahead by 60 ns (plus or minus an unknown multiple of 100 ns, but who cares). You'd still get 10% of the data rate, so 500 detections a year, i.e., just a few days' data could be enough.

Yes - that's the kind of thing I had in mind.

But I imagine this is not so easy to set up; and I think the present paper (unless I'm missing something) is so weak that it doesn't provide a reason to try.

Is it hard to modulate the proton bunches like that? In any case my impression is that this experiment doesn't want to mess with that, because their goal is to maximize neutrino luminosity and look for oscillations, not this effect.

As for it being so weak - they claim it's a 6-sigma effect. I don't think that can just be dismissed out of hand. I'd like to see the statistics re-done more carefully - it's only the events near the rising and falling edges that should really matter, but that didn't seem to be taken into account properly. Once it is, the significance might drop precipitously.
 
Too bad they can't reflect the neutrino beam back to the origin, with its clock, instead of having to correlate two timing systems.
It'd give them twice the time to measure.
 
As for it being so weak - they claim it's a 6-sigma effect. I don't think that can just be dismissed out of hand. I'd like to see the statistics re-done more carefully - it's only the events near the rising and falling edges that should really matter, but that didn't seem to be taken into account properly. Once it is, the significance might drop precipitously.

I suspect that the calculation within the model they assume is correct (though there isn't enough detail to tell), and within that model 6 sigma is enough. But the problem is how accurate that model is (i.e., how close their pdf is to the actual neutrino distribution), and there doesn't seem to be anything quantitative about the precision there.

So I think the presentation is weak until the second question is addressed; maybe they've done this, but have chosen not to discuss it for some reason. Basically, it's saying `if we assume X, then it's a 6 sigma result'. So the question is: can they assume X?

And I agree that it seems to be the tails that matter - I think that's addressed automatically within their statistical model, but the question is how good their estimate of the pdf is in the tail. (Which leads to even more potential problems: e.g., if their synchronization works 99% of the time, that might not be good enough.)
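Here's a toy of that failure mode (Python, invented shapes): generate arrivals from one curve and fit them with a template whose edges are skewed by just 20 ns. The skew is absorbed straight into the best-fit offset, and nothing in the purely statistical error bar warns you:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(10000.0)                         # ns grid

    def trapezoid(t0):
        # flat-topped toy pulse with 50 ns edges starting at t0
        s = np.clip((t - t0) / 50.0, 0.0, 1.0) \
            * np.clip((9950.0 + t0 - t) / 50.0, 0.0, 1.0)
        return s / s.sum()

    true_pdf = trapezoid(0.0)        # what the neutrinos actually do
    proxy_pdf = trapezoid(20.0)      # the template, edges skewed by 20 ns

    arrive = rng.choice(t, 16000, p=true_pdf) + 60.0   # true shift: 60 ns

    def best_shift(pdf):
        shifts = np.arange(0.0, 200.0)
        nll = [-np.sum(np.log(pdf[np.clip((arrive - s).astype(int), 0, 9999)]
                              + 1e-12)) for s in shifts]
        return shifts[np.argmin(nll)]

    print("fit with the true curve: ", best_shift(true_pdf))   # ~60 ns
    print("fit with the skewed one: ", best_shift(proxy_pdf))  # exactly 20 ns lower

Here the mismatch is a pure time skew of the edges, so the bias is trivially the full 20 ns; a mismatch in edge shape rather than edge position does the same kind of thing, just less transparently.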
 
I think this is all completely fascinating.
I wonder - could the experts here hazard a guess as to how long it just might take to find an answer to the faster than light question?
 
I think this is all completely fascinating.
I wonder - could the experts here hazard a guess as to how long it just might take to find an answer to the faster than light question?

No one is going to believe it until it's repeated elsewhere. And even then, extraordinary claims require extraordinary evidence, and this is a very hard experiment to do well.

A likely scenario is that:

1) Some serious criticisms of the result will get aired, perhaps along the lines of the ones we are discussing here, which will give most of the community an excuse to forget about this.

2) The OPERA experiment will try to correct for those critiques, and/or another experiment (Minos for instance) will attempt to replicate the experiment.

3) The effect will go away and be forgotten.

I wouldn't be surprised if that sequence is complete within a year.

If anyone thinks that instead it will be confirmed that neutrinos go faster than light, I've got a large sum of money I'm willing to put up for a bet.
 
Just my thought, and I am happy to be wrong.

Although both places have the same angular velocity, their linear speeds would differ - Gran Sasso, Italy, being closer to the equator, travels faster.

Based on my limited understanding of Einstein, i.e. that time slows for a faster-moving object: could that difference in speed have the resulting effect of a 'time lapse'?

I have not had any response to my above post, and I admit it is a hunch based on limited knowledge of Einstein, but my question remains.

The New Scientist article states that "The researchers also accounted for an odd feature of general relativity in which clocks at different heights keep different times." http://www.newscientist.com/article/dn20961-fasterthanlight-neutrino-claim-bolstered.html. This deals with gravitational time dilation.

Was the same applied to time dilation based on the differing linear speeds of the two points, i.e. Gran Sasso and CERN - http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/tdil.html (some 21.7 m/s)?

And if not included, could this not have contributed to these surprising results?
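To put a rough number on it (Python; I'm taking the rotation speed at latitude φ as about 465·cos φ m/s, with CERN near 46.2°N and Gran Sasso near 42.4°N, and ignoring all the subtleties of rotating frames - this is just an order of magnitude):

    import math

    c = 2.998e8                              # m/s
    v_eq = 465.0                             # equatorial rotation speed, m/s

    v_cern = v_eq * math.cos(math.radians(46.2))   # ~322 m/s
    v_gs = v_eq * math.cos(math.radians(42.4))     # ~343 m/s
    print(f"speed difference: {v_gs - v_cern:.1f} m/s")

    # Fractional clock-rate difference from time dilation, to lowest order:
    rate = (v_gs**2 - v_cern**2) / (2.0 * c**2)

    tof = 2.43e-3                            # neutrino time of flight, s
    print(f"effect over one flight: {rate * tof * 1e9:.1e} ns")
    # ~2e-7 ns: some eight or nine orders of magnitude below the 60 ns anomaly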

Anders - no offense but please do not respond.
 
It's impossible. Or rather, if it's possible, the whole notion of "mass" needs to be re-examined.
Which means a lot of work for researchers and theoreticians. Not necessarily a bad thing.
They have already been testing it for two or three years. And they still don't want to publish it. Because they are afraid.

What more do you need to accept the truth that E=mc^2 is falsified as being pseudoscience? As nonsense.
Nonsense? No. In need of either revision or a scaling constant? Possible. This isn't a "baby out with the bathwater" moment.
We read:

"Over 3 years, OPERA researchers timed the roughly 16,000 neutrinos that started at CERN and registered a hit in the detector. They found that, on average, the neutrinos made the 730-kilometer, 2.43-millisecond trip roughly 60 nanoseconds faster than expected if they were traveling at light speed."

16,000 experiments.
OK, if true, at least this is using a lot of data points. (But what if not all neutrinos are created equal? What if the variance in their nature is a bit larger than had been previously understood?)
I think there is a cognitive bias in science against accepting evidence to the contrary.
A "how can this be" reaction is understandable. This is new territory, if it all stands up. I think that is why they are so keen for a second set of eyes/second set of protocols, to take a good hard look.
What more proof do you need to accept that a part of the scientific worldview till now was based on pure nonsense?
Given its utility in solving quite a few mundane problems, like the manufacture of nuclear weapons and nuclear reactors that actually work, not quite "pure nonsense."
The math failed. E=mc^2 is at the same level as homeopathic ideas. It's nonsense.
16,000 experiments, and still you cannot accept the truth?
I think Einstein was wrong.
Wrong about what? This may be yet another scaling problem. (See also how things at the boundary layer are different, in fluids).
I am not a physicist, but I thought the whole point of SR is that there is no 'rest.'
Poor photons. Never get to sit down for a cold beer, and complain about how tired they are after all that scurrying about at light speed. :(
Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.
If you try some of my four alarm chili in a few weeks, is that a close enough approximation for your purposes?
Question: don't you find it a little disturbing that I am teaching you some of the most basic facts about SR, a theory you claim to have disproven?
That made me chuckle.
Einstein's theory is based on the observation that light travels at the same speed for all reference frames.

A theory explains a fact. You can't say that Einstein's theory is wrong about light travelling at the same speed regardless of reference frame, when the purpose of the theory was to explain exactly that fact.
So long as you aren't using light as your medium of measurement ...
I completely agree with Sol Invictus. I know this post therefore looks a little superfluous, but I just couldn't help butting in and offering my 2c as well.
Chortle.
There was a pretty reasonable piece on the BBC radio news. A physicist, whose name I didn't catch, described her interpretation of the arXiv paper and the significance of the result, were it shown to be true.
* the distance measurement seems pretty solid.
* she suspects error in synchronizing the two clocks
* breaking the c limit means breaking conservation of energy

Can't wait for the free energy dudes to latch onto that last one :)
And the Oil Companies to suppress it! :eek: (oops, tinfoil hat at the laundry)
Even if neutrinos are going at the speed of light and not faster, it's not consistent with the theory that only massless objects can go that fast.

I wonder what the implications are if this turns out to be true.
For other theories in science.
At what scale?
I think these neutrinos are just in a giant hurry to get to one of Berlusconi's "bunga bunga" parties while they're still going on.
Hot Italian babes are at stake. Do you blame them?
Because I said that Einstein's theories were wrong before CERN made this announcement. So it confirms my earlier statements. The conspiracy is to protect the hoax theory of relativity, not to disprove it as CERN is doing now.
CERN is looking for a second opinion, and either confirmation or "ah, that's where the error was." That's good science.
Do we have any rational basis for saying what a "c violation" would look like since all the theories we have assume it can't happen?
Good question.
Meridian said:
Assuming that the neutrinos all travel at the same speed, the arrival curve will just be a time-shifted form of the generated curve, with statistical noise (since not so many neutrinos arrive). Knowing the generated curve, you can then find the best fit for the time shift, and use statistics to work out the uncertainty.
<-snip rest of a very nice post->
Is the presumed 'sameness' of neutrinos to one another a possible source of experimental error?
(During similar experiments in Japan, some neutrinos atually changed their flavor during the trip. Peculiar little suckers, aren't they?)
Aye. Maybe we don't understand them as well as we think. (Well, not me, but the people whose specialty is neutrinos at work and at play...)
No one is going to believe it until it's repeated elsewhere.
If anyone thinks that instead it will be confirmed that neutrinos go faster than light, I've got a large sum of money I'm willing to put up for a bet.
Sol, I have been surprised to see your taking what looks like "this can't be true" as a position.

It might, or might not.

I admire the CERN team for their appeal to another team to see if they can reproduce their results. Strikes me as good science. Their entering assumption seems to be "can we have measured this right?", and they wish for assistance in ensuring that they know what they know, or assistance in finding the source or scale of an error not previously factored in. (Enjoyed your and Meridian's points on how one sets up the experiment, and how one generates neutrinos. Gracias.)

Given how reliable most physics and math have proven to be to date, in terms of producing reproducible results, what may be underway here is finding out:

some things that operate differently at certain scales (this would not be the only one)

something new about neutrinos not previously understood

some things I can't even put into words since I haven't sufficient background

Keen to find out how they come to peace with these results.

The good news, I'll say again, on the side of "this is correct" is how much further research would be needed to find out where else this result applies.

For research scientists, I think that spells ... FUNDING! :cool:
 
