
The Universe is Deterministic

So then the annihilation of a virtual positron and real electron can produce only one photon?

Yes. It can also produce zero if there's a photon in the initial state.

Well then, since the processes “we're discussing” all depend upon virtual particles, in this case virtual electron-positron pair production, what then causes that specific virtual pair production?

A quantum fluctuation. That diagram is a way of describing the uncertainty in the number of particles present in the vacuum.

Certainly I do not see the fact that the article, and specifically the text I quoted, was changed just this morning as being ‘acausal’.

If you're implying that I changed it, I didn't (although perhaps someone else who read this did?). I was just asking where you got that quote.

Nor do I find your claim that you “couldn't find the text you quoted anywhere” to be significant, since you could have simply looked at the previous version even after this morning's revision.

http://en.wikipedia.org/w/index.php?title=Electron&diff=313168826&oldid=313131522

I should have looked at the history. Sorry.

Or you could even just view the reference cited (#77) by this morning's revision and the previous version of that article. At the top of page three, the diagram to the right is precisely the interaction I was referring to.

And that reference gives the significance of this self-energy contribution much the same as the wiki article before this morning's revision.

I scanned that reference, and I agree with everything I read. Is there part of it you wanted to discuss?

Indeed, nothing has changed from when I first mentioned that particular interaction, even if some wiki article has; but thanks anyway, Sol, for helping me find the diagram I was referring to.

You're welcome, I guess...
 
Yes. It can also produce zero if there's a photon in the initial state.

A virtual photon exactly like that which would result from such a virtual positron real electron annihilation?

A quantum fluctuation. That diagram is a way of describing the uncertainty in the number of particles present in the vacuum.

Undoubtedly, but what causes that quantum fluctuation and thus “the uncertainty in the number of particles present in the vacuum”?

If you're implying that I changed it, I didn't (although perhaps someone else who read this did?). I was just asking where you got that quote.

Who changed it is irrelevant; the fact is that it was changed just this morning, and only the portion of the article from which I had quoted was changed, for which I had provided a direct link to that specific section of the article.


I should have looked at the history. Sorry.


I scanned that reference, and I agree with everything I read. Is there part of it you wanted to discuss?



You're welcome, I guess...

No problem; the important thing is that we can now discuss the interaction I was referring to and the implications of such quantum fluctuations for both causality and determinism.
 
More interesting to me from a theoretical and philosophical standpoint is whether the dimensionless constants (such as the fine structure constant) are computable. Also, is the total energy of the universe computable?
 
A virtual photon exactly like that which would result from such a virtual positron real electron annihilation?

Sure.

Undoubtedly, but what causes that quantum fluctuation and thus “the uncertainty in the number of particles present in the vacuum”?

In this case there really isn't a cause, because there wasn't really an event. Those diagrams don't represent real events, hence "virtual" particle. They represent possible events, or possible "paths", hence "path" integral. In QM, leaving aside the issues of decoherence and measurement for now (they aren't relevant to this), all possible events happen, each weighted by a certain phase.

That diagram allows you to compute the phase attached to a certain possible event, or actually a certain set of possible events.
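As a schematic of "all possible events, each weighted by a phase" (this is just the standard path-integral form, not specific to this thread):

\[
\mathcal{A}(i \to f) = \sum_{\text{possible paths}} e^{\,i S[\text{path}]/\hbar},
\qquad
P(i \to f) = \left| \mathcal{A}(i \to f) \right|^2 .
\]

Each diagram contributes one term (one set of "possible events") to the sum; no individual term is a record of something that "really happened".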

No problem; the important thing is that we can now discuss the interaction I was referring to and the implications of such quantum fluctuations for both causality and determinism.

OK.

Regarding causality, in relativistic QFT the question is decisively settled by the fact that the field is quantized in a way that ensures that operators always commute when outside the lightcone. In classical field theory the equivalent statement is that the equations of motion are hyperbolic with a causal cone that coincides with the light cone.

I can explain the physical meaning of either of those statements if requested.
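In symbols, the microcausality statement above is the standard condition (with metric signature (+,-,-,-), so spacelike separations have (x-y)^2 < 0):

\[
\left[ \hat{\mathcal{O}}_1(x), \hat{\mathcal{O}}_2(y) \right] = 0
\quad \text{whenever } (x-y)^2 < 0 ,
\]

i.e. observables at spacelike separation commute, so a measurement at one point cannot influence the statistics at the other.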

As for determinism, QFT as a theory of particle physics is only consistent with the MW interpretation. But given some mysterious additional ingredient, I suppose large systems could somehow behave according to Copenhagen. Transactional and Bohmian are incompatible with QFT at the microscopic level - and that's the level that's been tested to unprecedented accuracy.

More interesting to me from a theoretical and philosophical standpoint is whether the dimensionless constants (such as the fine structure constant) are computable.

That's a good question. No one knows the answer. I'd say the current consensus is "probably no for many, but yes for a few".
 

Well then why specifically did you claim of the interaction I described…

That's not a valid process.

Perhaps you could even give your own detailed interpretation of the interaction shown in the diagram to the right at the top of page three in the paper linked below.

http://arxiv.org/PS_cache/arxiv/pdf/0709/0709.3041v1.pdf

In this case there really isn't a cause, because there wasn't really an event.

You evoked quantum fluctuation as the cause of virtual pair production; now “there really isn't a cause, because there wasn't really an event”. Thus there should be no consequences from those non-event virtual pair productions, interactions, or quantum fluctuations (like the magnetic moment of the electron), since “there wasn't really an event”.


Those diagrams don't represent real events, hence "virtual" particle. They represent possible events, or possible "paths", hence "path" integral. In QM, leaving aside the issues of decoherence and measurement for now (they aren't relevant to this), all possible events happen, each weighted by a certain phase.

You seem to be contradicting yourself: before, “there wasn't really an event”, but now “all possible events happen”, including those virtual events and quantum fluctuations for which you claimed “In this case there really isn't a cause”. You seem to be arguing for a dependence on acausal virtual events.

That diagram allows you to compute the phase attached to a certain possible event, or actually a certain set of possible events.

Resulting in the probability of some event like the detection of a traveling electron, hence the probabilistic nature of QM and QFT.

That’s the real rub of it, Sol: strict determinism is an unbroken chain of causality from the past to the present and on to the future. To abandon strict determinism we must break that direct causal chain somewhere along the line. Now probabilistic causation retains the central aspects of causality by introducing probabilities: a cause simply increases the probability of some event. These probabilities can be the result of a lack of knowledge of the current state, or the result of other factors not fully determined by the current conditions (say, a genetic predisposition triggered by some future environmental conditions), and thus the result has, to at least some extent, an acausal dependence with respect to those current conditions.


OK.

Regarding causality, in relativistic QFT the question is decisively settled by the fact that the field is quantized in a way that ensures that operators always commute when outside the lightcone. In classical field theory the equivalent statement is that the equations of motion are hyperbolic with a causal cone that coincides with the light cone.

I can explain the physical meaning of either of those statements if requested.

By “commute” do you mean that they all take on the same value and would that value happen to be zero? Anyway, that only makes relativistic QFT local, and simply being local in no way decisively settles the question of causality, as exemplified above.

As for determinism, QFT as a theory of particle physics is only consistent with the MW interpretation. But given some mysterious additional ingredient, I suppose large systems could somehow behave according to Copenhagen. Transactional and Bohmian are incompatible with QFT at the microscopic level - and that's the level that's been tested to unprecedented accuracy.

Strict determinism is inconsistent with QFT unless one can find a cause for quantum fluctuations. If the “unprecedented accuracy” you are referring to is the agreement of the calculated value for the magnetic moment of the electron with the measured value, that magnetic moment is only due to those virtual interactions and quantum fluctuations for which you claim “there really isn't a cause, because there wasn't really an event”. So that very “unprecedented accuracy” demonstrates the lack of strict determinism, the lack of understanding of some contributing factors, or at least to some degree an acausal relation.
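For reference, the leading "virtual" contribution being argued about here is Schwinger's one-loop result for the electron's anomalous magnetic moment (a standard result, quoted for context):

\[
a_e \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + O(\alpha^2) \approx 0.00116 ,
\]

which, with the higher-order terms included, agrees with the measured value to extraordinary precision.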




Here is a paper I haven’t quite finished reading (and which will probably take me longer to digest), relevant to the issues of localization in QFT.

http://arxiv.org/PS_cache/gr-qc/pdf/9211/9211004v2.pdf
 
More interesting to me from a theoretical and philosophical standpoint is whether the dimensionless constants (such as the fine structure constant) are computable. Also, is the total energy of the universe computable?

I do not see how (in most cases) one could arrive at a dimensionless value without some computation. Whenever we do a measurement there is (in most cases) at least some unit of measure or dimension associated with that measurement. For example, the fine structure constant is (most basically) the proportion of the charge of the electron squared to the Planck charge squared. The Planck charge can be calculated, but the electron charge is the result of measurement (at least for now). One particular exception that I know of is angle measured in radians; the radian is the proportion of the length of a circular arc segment subtended by an angle to the length of the radius of that circle.

If we could compute the charge of the electron, then the fine structure constant would be completely computable. The problem is on what one might base that computation of the charge of the electron, which itself could not be dimensionless but must have the units associated with charge. The charge of the electron, or more specifically the elementary charge (the positive charge of a proton), is considered to be a fundamental physical constant.

So although it is not inconceivable to have a dimensionless unit of measure, such as the radian (since one can represent that proportion of arc to radius in the measuring tool), and although we could certainly measure (or just relate) units of charge to the Planck charge, resulting in a dimensionless representation of charge (which one might consider the fine structure constant to be), it is difficult to see how something like the fine structure constant could be completely computable without being based on some other dimensional measurement, if only for the calculation of the elementary charge. Even the calculation of the Planck charge depends on at least one measured value, the Planck constant (the speed of light and the permittivity of free space both being defined constants).
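As a concrete illustration of the ratio described above, here is a minimal Python sketch using CODATA values (c, e, and hbar are exact in the 2019 SI; eps0 is a measured quantity):

import math

# CODATA values: c is defined; e and hbar are exact in the 2019 SI;
# eps0 (vacuum permittivity) is measured
e    = 1.602176634e-19   # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Planck charge: q_P = sqrt(4*pi*eps0*hbar*c)
q_P = math.sqrt(4.0 * math.pi * eps0 * hbar * c)

# fine structure constant as the ratio described above: alpha = (e/q_P)^2
alpha = (e / q_P) ** 2

print(f"Planck charge q_P = {q_P:.6e} C")  # ~1.875546e-18 C
print(f"alpha = {alpha:.10f}")             # ~0.0072973526
print(f"1/alpha = {1 / alpha:.6f}")        # ~137.035999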
 
Well then why specifically did you claim of the interaction I described…

You said:
Likewise from my understanding causality tends to become a bit blurred in virtual interactions and on the Planck scale. Particularly in the scenario where an electron encounters a virtual positron producing a photon of gamma radiation that then becomes a virtual positron electron pair, that electron then being the real one which may eventually be detected and the virtual positron being the cause of the original annihilation event.

There's nothing "blurry" about causality, even if we pretend this is a real process. Specifically, the problem is in some of your "then"s which mix up the temporal order. Here's how to describe the process in that diagram:

An electron is moving along. At some moment an electron-positron-photon triplet appears nearby due to a quantum fluctuation. A few moments later, the positron and photon annihilate with the original electron, leaving the other electron.

There's nothing acausal about that.

You evoked quantum fluctuation as the cause of virtual pair production; now “there really isn't a cause, because there wasn't really an event”. Thus there should be no consequences from those non-event virtual pair productions, interactions, or quantum fluctuations (like the magnetic moment of the electron), since “there wasn't really an event”.

No, that doesn't follow. Think of the double slit experiment. The electron doesn't "really" go through one slit or the other, it goes through both, and neither.

Mathematically this makes perfect sense, but it's hard to say it in English. Perhaps the best is to say it goes through both at once, but that too is imprecise.
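In symbols, the two-slit statement is just the standard amplitude sum:

\[
P(x) = \left| \psi_1(x) + \psi_2(x) \right|^2
     = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\left[ \psi_1^{*} \psi_2 \right] ,
\]

and the interference (cross) term is exactly what the "one slit or the other" picture cannot produce.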

You seem to be contradicting yourself: before, “there wasn't really an event”, but now “all possible events happen”, including those virtual events and quantum fluctuations for which you claimed “In this case there really isn't a cause”. You seem to be arguing for a dependence on acausal virtual events.

There's no contradiction. Your mistake seems to be in assuming that only one thing can happen, so that you can ask "what was the cause for that to happen as opposed to something else?" But in QM, in a sense, all possible things "happen", and events at one point are not influenced by events at a causally disconnected point.

That’s the real rub of it, Sol: strict determinism is an unbroken chain of causality from the past to the present and on to the future. To abandon strict determinism we must break that direct causal chain somewhere along the line.

In the many worlds interpretation, the state of the system is fully and completely determined by its state at any moment in the past. Hence the theory is deterministic in a strict sense. It's also causal, both in the sense in the previous sentence and in the sense that nothing can go back in time, or even outside the lightcone.

But the results of physical measurements cannot be predicted from the theory except probabilistically, because every measurement results in two experimenters, each of whom obtains a different result, and from their point of view they cannot explain why they are one and not the other.
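The deterministic part referred to here is just unitary Schrödinger evolution: the state at any one time fixes the state at every later time,

\[
i\hbar \frac{\partial}{\partial t} |\Psi(t)\rangle = \hat{H} |\Psi(t)\rangle
\quad \Longrightarrow \quad
|\Psi(t)\rangle = e^{-i\hat{H}(t - t_0)/\hbar} |\Psi(t_0)\rangle .
\]

The probabilities enter only when an experimenter asks which branch they are in.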

By “commute” do you mean that they all take on the same value and would that value happen to be zero?

No to both.

Strict determinism is inconsistent with QFT unless one can find a cause for quantum fluctuations.

Your notion of causality is simply too naive, because again you are assuming only one thing happens. You can't have it both ways: either you don't regard QM fluctuations as real events, in which case there is no need to look for a cause, or you regard all fluctuations as real, in which case the cause should be a cause for them all together as a set, not for one specific one. In the latter case the cause is the wavefunction.

If the “unprecedented accuracy” you are referring to is the agreement of the calculated value for the magnetic moment of the electron with the measured value, that magnetic moment is only due to those virtual interactions and quantum fluctuations for which you claim “there really isn't a cause, because there wasn't really an event”. So that very “unprecedented accuracy” demonstrates the lack of strict determinism, the lack of understanding of some contributing factors, or at least to some degree an acausal relation.

Nope. See above.

I do not see how (in most cases) one could arrive at a dimensionless value without some computation. Whenever we do a measurement there is (in most cases) at least some unit of measure or dimension associated with that measurement.

Just measure two quantities with the same dimensions and take their ratio. I suppose dividing them is a "computation", but it's not exactly a difficult one.

For example, the fine structure constant is (most basically) the proportion of the charge of the electron squared to the Planck charge squared. The Planck charge can be calculated, but the electron charge is the result of measurement (at least for now).

The fine-structure constant is directly measured.
 
What about the Afshar Experiment?

That experiment involved them being able to measure the energy state and the velocity/location of the particles at the same time, which AFAIK kind of defeats the whole Heisenberg uncertainty principle.
 
I just finished a 24-lecture Great Courses CD on relativity and QM taught by a Professor Wolfson, and found it to be quite useful in grasping some of the concepts of QM. The thing that was the weirdest was not only the wave nature of particles, but that altering one state of reality, such as slowing a particle's spin, could instantaneously cause its "twin" to respond in the same exact way no matter how far apart they become. This, however, was not considered to be a transfer of information because that would violate relativity. But I ask you, could not a control circuit be set up where if you "twist" or "spin" or "turn on" one of the particles, the other will respond on, say, Mars to turn on a motor instantaneously? What am I missing here? Should I go back and review the lectures?
 
I just finished a 24-lecture Great Courses CD on relativity and QM taught by a Professor Wolfson, and found it to be quite useful in grasping some of the concepts of QM. The thing that was the weirdest was not only the wave nature of particles, but that altering one state of reality, such as slowing a particle's spin, could instantaneously cause its "twin" to respond in the same exact way no matter how far apart they become. This, however, was not considered to be a transfer of information because that would violate relativity. But I ask you, could not a control circuit be set up where if you "twist" or "spin" or "turn on" one of the particles, the other will respond on, say, Mars to turn on a motor instantaneously? What am I missing here? Should I go back and review the lectures?

The magnitude of the spin of an elementary particle is constant; it cannot be slowed down or sped up by any means, only its direction can change.

With entangled particles, you don't know the spin state of either particle until you do a measurement. The measurement of any one particle is random, but the correlation between them is perfect: if you measure your half of the pair, you know what Bob's will be even if he hasn't measured it yet. But if you're on Earth and he's on Mars, how do you send a message to Bob? You can measure the spin of your particle, and instantly Bob's particle's spin will have a correlated outcome, but you can't MAKE your spin have a particular outcome before measurement. So what does Bob observe? Nothing, unless he measures his spin, and when he measures his spin, it appears to be random until he talks to you, which happens at light speed. You can't send a superluminal signal, because no result from Bob's measurement will tell him anything about what you did. He won't even be sure that you measured your particle at all until he talks to you.
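Quantitatively, for a spin-singlet pair measured along axes separated by an angle θ, the standard result is

\[
P(\text{opposite outcomes}) = \cos^2(\theta/2),
\qquad
P(\text{up}) = P(\text{down}) = \tfrac{1}{2} \ \text{on each side separately},
\]

so each experimenter's local statistics are 50/50 no matter what the other does; only the correlations, which become visible after light-speed communication, carry any structure.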
 
Imagine Alice and Bob share an entangled pair of electrons with total spin zero. Bob is on Mars, which let's say is 5 light minutes away. The Super Bowl is today and Bob wants to know who won before the rest of his friends (who are watching it on the 5-minute-delayed TV broadcast). Bob asks Alice to measure its spin along the x axis at 0 degrees if Green Bay wins, and to measure it at 180 degrees if Dallas wins. He will measure his at 0 degrees to try to learn the result of the game. We'll have to assume here that Alice and Bob know the game will end at a certain time, say 7pm EST (which certainly isn't true for actual football games). The game ends, Green Bay wins. Alice takes her electron and measures its spin at 0 degrees. There are 2 possible outcomes for her measurement:

A) She finds that the spin of her electron matches her measurement. Bob measures his and finds his does not match.

B) She finds that the spin of her electron DOES NOT match her measurement. Bob measures his and finds his does match.

No matter what she finds, Bob will find the opposite. But since what she finds is random, what Bob finds is random (well, kind of).

Bob sees that there are 5 minutes left in the game, so he runs over to his electron and does his measurement, and it does not match. He realizes that one of these events happened:

A) Green Bay won, Alice measured at 0 degrees and found a match.

B) Dallas won, Alice measured at 180 degrees and didn't find a match.

C) Ziggurat's suggestion - Alice fell asleep and didn't even bother making a measurement, meaning Bob just got a totally random result. We'll assume Bob knows Alice would never do this to him, so he will ignore this possibility.

Five minutes later the broadcast of the end of the game reaches Bob (and his friends), they see Green Bay win, and he realizes that condition A happened. Now Bob finally gets his message (at the speed of light, just like his friends who don't have entangled electrons). Now it seems foolish to have even bothered with the entangled electrons, since he could have just watched the game like everyone else and learned the outcome just as quickly.
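A minimal toy simulation of this scenario (hypothetical code; it assumes nothing beyond the standard singlet statistics) makes the point concrete: whichever axis Alice chooses, Bob's local outcome rate stays at 50/50, so nothing about the game reaches him ahead of the broadcast.

import math
import random

def bob_up_rate(alice_angle_deg, bob_angle_deg=0.0, trials=100_000):
    # Singlet pair: Alice's outcome is 50/50. Given her result, Bob's
    # particle is opposite along her axis; measured along his own axis,
    # his outcome is opposite to hers with probability cos^2(theta/2).
    theta = math.radians(alice_angle_deg - bob_angle_deg)
    p_opposite = math.cos(theta / 2.0) ** 2
    ups = 0
    for _ in range(trials):
        a = 1 if random.random() < 0.5 else -1         # Alice's outcome
        b = -a if random.random() < p_opposite else a  # Bob's outcome
        if b == 1:
            ups += 1
    return ups / trials

# Alice encodes "Green Bay" as measuring at 0 degrees, "Dallas" as 180.
print("Bob's up-rate if Green Bay won:", bob_up_rate(0.0))   # ~0.50
print("Bob's up-rate if Dallas won:  ", bob_up_rate(180.0))  # ~0.50

Either way Bob sees 50/50 locally; the correlation with Alice's result only shows up when their records are compared, at light speed or slower.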
 
Could the wave theory of matter be compared to a game of Go Fish? I have one card or particle, and its twin is one of, say, 100 possibilities within a field of them face down. Quantum theory seems to be saying that, sans any other information, where the twin card/particle exists cannot be determined until I guess and the cards are turned over, revealing the correct location and whether my guess is right or wrong. Until that instant the location of the card/particle - and even its existence - is nothing more than theory or statistical probability. They actually don't exist until a decision or guess is made as to where they are, and only then does certainty of locale - and existence - arise. Am I going in the right direction on this?
 
You said:

I know exactly what I said.

There's nothing "blurry" about causality, even if we pretend this is a real process. Specifically, the problem is in some of your "then"s which mix up the temporal order. Here's how to describe the process in that diagram:

An electron is moving along. At some moment an electron-positron-photon triplet appears nearby due to a quantum fluctuation. A few moments later, the positron and photon annihilate with the original electron, leaving the other electron.

There's nothing acausal about that.

You mean except for the vacuum fluctuation, unless you have come up with some cause by now. So the temporal order is your only complaint about how I described the interaction. As another poster noted before, it is simply a matter of preference to maintain the assumption of causality.




No, that doesn't follow. Think of the double slit experiment. The electron doesn't "really" go through one slit or the other, it goes through both, and neither.

Mathematically this makes perfect sense, but it's hard to say it in English. Perhaps the best is to say it goes through both at once, but that too is imprecise.

But the slit really does influence (even mathematically) the possible paths of the electron. The only thing that doesn't follow is your claim that "there really isn't a cause, because there wasn't really an event", which was followed by "all possible events happen".



There's no contradiction. Your mistake seems to be in assuming that only one thing can happen, so that you can ask "what was the cause for that to happen as opposed to something else?" But in QM, in a sense, all possible things "happen", and events at one point are not influenced by events at a causally disconnected point.

I have made no such assumption and your referenced statements are in direct contradiction.


In the many worlds interpretation, the state of the system is fully and completely determined by its state at any moment in the past. Hence the theory is deterministic in a strict sense. It's also causal, both in the sense in the previous sentence and in the sense that nothing can go back in time, or even outside the lightcone.


But the results of physical measurements cannot be predicted from the theory except probabilistically, because every measurement results in two experimenters, each of whom obtains a different result, and from their point of view they cannot explain why they are one and not the other.

That is specifically what makes it not strict determinism: the future is not strictly determined by the present and/or past. The fact that a portion of it is deterministic, in that the state vector or wave function develops deterministically over time, does not make the entire theory strictly deterministic. The fact that the outcome of a measurement is not strictly determined by the present and/or the past makes it in fact not strictly deterministic.



No to both.

Then you will have to explain your use of “commute” in that context.

Your notion of causality is simply too naive, because again you are assuming only one thing happens. You can't have it both ways: either you don't regard QM fluctuations as real events, in which case there is no need to look for a cause, or you regard all fluctuations as real, in which case the cause should be a cause for them all together as a set, not for one specific one. In the latter case the cause is the wavefunction.

Again, I have made no such assumption, and your assumption that I have is simply wrong; thus so too are your conclusion and criticism.

Nope. See above.

See for yourself: the assumption is yours and is simply wrong, as is the "having it both ways".



Just measure two quantities with the same dimensions and take their ratio. I suppose dividing them is a "computation", but it's not exactly a difficult one.

Well fortunately not everything has to be done with tensors.


The fine-structure constant is directly measured.

No one said it wasn’t. The point was that it could be computed from other measured values. Perhaps eventually we may have just one measured value and everything else defined from that. I could see that a lot more easily than everything just being defined (or computed).


I guess we are going to have to just agree to disagree on the other issues because we obviously appear to be just talking past each other.
 
A conundrum. I have friends hiking the Pacific Crest Trail; they are relying on me to send them a food package. But I have no information whatsoever as to where they are. If I DON'T just guess where they are and send the package, they cannot exist. They are just a possibility somewhere along the PCT. But since they do not exist unless I guess, I have no reason to guess or even contemplate where they might be. Therefore they become a wave function possibility again and can or will exist as long as SOMEBODY sends them information or vice versa. They simply will no longer be in my particle world without further information. But then I get a phone call providing me with information that a package that was supposed to be sent wasn't, and they exist exactly once again. But until I get that phone call or some kind of info providing locale, they are in the Twilight Zone. But then aren't all particles sending out information? Does that info really need to be detected in order for the wave function to collapse? Or does it merely have to be sent out?
 
The fact that in the quantum world you can have either velocity or location but never both reminds me of pi and its relationship to a circle. You can have a definite diameter but never an exact circumference, and vice versa: either one but never both.
 
While it may be impossible to prove, given our current understanding of physics, whether or not the universe is deterministic, there are no compelling reasons to therefore conclude that it isn't. All things in our experience have causes -- all of them. So why then is the prevailing opinion that the universe is nondeterministic?

When I was in grad school studying physics, I told my quantum prof that I believed the hidden variables theory of QM. He told me that he used to as well, but that a semi-recent experiment had proven that it wasn't possible for hidden variables to be the cause of QM's random outcomes.

I can't recall any of the details, as that was a long time ago. But he explained it to me, and it made sense at the time.

Things in our general experience don't behave in a quantum manner. That's why the universe at first glance appears to be deterministic. But as you delve into QM, it becomes clear that it isn't.

Note that in my opinion, I'd say that universe still follows rules, it's just that some of those rules only determine the probability of outcomes. So it's not that things are completely random, just that you can't predict the future with 100% certainty because of the variances.

If anyone's looking for a way to make Intelligent Design work, that might be it. If the universe were deterministic in the classical sense, things would get boring pretty quickly. The richness of our lives probably derives from the random element in QM, so we should be happy about it. :)
 
Forgive my ignorance, but something about viewing space as it was, and galaxies as they were, only a few billion years after the Big Bang bugs me. Imagine the Universe as a sphere of matter/warped space a billion light years across after the Big Bang. Well then, the farthest that light would have to travel from one object to another is a billion light years, so it would take a billion years to reach objects at the edge of the sphere. It would then pass those objects - regardless of their relative speeds - at c, and would by now be many billions of light years past those objects. So why are those objects, now billions of light years apart, still seeing light that is only a billion years old?
 
Forgive my ignorance, but something about viewing space as it was, and galaxies as they were, only a few billion years after the Big Bang bugs me. Imagine the Universe as a sphere of matter/warped space a billion light years across after the Big Bang. Well then, the farthest that light would have to travel from one object to another is a billion light years, so it would take a billion years to reach objects at the edge of the sphere. It would then pass those objects - regardless of their relative speeds - at c, and would by now be many billions of light years past those objects. So why are those objects, now billions of light years apart, still seeing light that is only a billion years old?
Your question is not stated clearly, but:

We do not just see the "only a billion year old light". We see the light from all ages of the universe from about 377,000 years after the Big Bang.

Remember that the speed of light means that looking out into space is the same as looking back into time, e.g. we see Alpha Centauri as it was 4 years ago.

You also seem to be assuming that the processes that were producing light a billion years after the Big Bang created a flash of light and then stopped. This is not the case.
 
