What is quantum entanglement? Part 6: Locality, reality, and John Bell

This is part 6 in a lengthy series of posts attempting to explain the idea of quantum entanglement to a non-physics audience.  Part 1 can be read here, Part 2 here, Part 3 here, Part 4 here, and Part 5 here.

In the last part of this series of posts, we discussed the practical implementation of entanglement using photons, which is the most common (though not only) way to study and apply entanglement in modern experiments.  In this post, we return to a bit of the history of the subject and look at a 1964 discovery that upended our understanding not only of quantum physics, but of what is knowable and unknowable in physical reality.

As in previous posts, we begin with a brief review.  The quantum theory developed in the 1920s indicates that discrete bits of matter — such as electrons and protons  — possess wave-like properties.  Physicists naturally began to ask the question: “what is doing the waving in matter?” Water waves are oscillations of water molecules, sound waves are oscillations of air molecules, but it wasn’t clear what is waving in a matter wave.

A working interpretation, as introduced by physicist Max Born, is that the probability of the particle having a particular behavior is waving!  This was elaborated upon by researchers pondering the science in Copenhagen, and the first complete description of what is going on in quantum particles is often called the “Copenhagen interpretation.”  In short, we may summarize it as follows:

All the properties of a quantum particle remain in an undetermined state, evolving as a wave, until they are measured.  Upon measurement, the part of the wave associated with the measurement collapses into a definite state, the height of the wave being a measure of how likely it is to be found in that state.

For example, if we flipped a “quantum quarter,” it would presumably be equally likely to be heads or tails until we measure it, at which point it “chooses” one of those options.  The wave of such a quantum quarter would therefore be written as
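
\left|\text{quarter}\right\rangle = \frac{1}{\sqrt{2}}\left|\text{heads}\right\rangle + \frac{1}{\sqrt{2}}\left|\text{tails}\right\rangle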

This can be interpreted as saying “the quantum state of a flipped quarter is simultaneously and equally heads and tails.” When it is measured, it is 50% likely to be found heads, 50% tails.

This interpretation was given a very serious challenge by Einstein, Podolsky and Rosen in 1935, when they looked at what happens when pairs of quantum particles are correlated. For example, suppose we have a “quantum double quarter,” with the tails glued together.

If coin 1 is measured with heads up, coin 2 is certainly tails up, and vice versa; however, it is equally likely that coin 1 or coin 2 will be the one heads up.  Until measurement, then, the double quarter is in a quantum state where each outcome is possible:
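
\left|\text{double quarter}\right\rangle = \frac{1}{\sqrt{2}}\left|\text{heads 1 up}\right\rangle + \frac{1}{\sqrt{2}}\left|\text{heads 2 up}\right\rangle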

Once measured, the state collapses and is either “definitely heads 1 up” or “definitely heads 2 up.”

But the state collapse leads to an interesting problem: unlike a quantum quarter, a pair of quantum particles can be entangled and sent far away from each other, in principle even light-years apart, and remain entangled.  For instance, as discussed in the previous part of this series, we can create a pair of photons (particles of light) with entangled polarizations: if one photon is horizontally polarized, the other must be vertically polarized.  This is illustrated simply below; the orange and the red colors indicate the two possible outcomes.

Suppose we send one photon to Alpha Centauri, 4.37 light-years from Earth, and keep the other here.  Before measurement, the two particles are in the following entangled state.
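
\left|\text{photon pair}\right\rangle = \frac{1}{\sqrt{2}}\left|H\right\rangle_\text{Earth}\left|V\right\rangle_\text{AC} + \frac{1}{\sqrt{2}}\left|V\right\rangle_\text{Earth}\left|H\right\rangle_\text{AC},

where the subscript “AC” labels the photon sent to Alpha Centauri.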

Let us suppose we measure the state of the photon on Earth.  If we find that it is “H,” then the wavefunction of the photon on Alpha Centauri must have collapsed to the “V” state.  This collapse must have happened instantaneously.  If it were not instantaneous, a measurement of the photon on Alpha Centauri could happen before the collapse, in which case it could also be found to be “H,” contradicting the perfect anti-correlation of the entangled pair.

Einstein, Podolsky and Rosen (EPR) felt that this “spooky action at a distance,” which seemingly violates relativity, demonstrates that the interpretation of quantum mechanics in terms of probabilities, waves, and wavefunction collapse must be an incomplete, if not flawed, picture.

But what was EPR’s alternative interpretation of quantum mechanics? (Now we pass from our review to the novel topics of this post.)  The Copenhagen interpretation suggests that the behavior of a quantum particle has a built-in randomness: there is no way to determine in advance the outcome of a measurement, because the particle or particles themselves have not yet “decided” what they’re going to do.  EPR argued, based on the seeming absurdity of entanglement, that the behavior of quantum particles is not inherently random, but that this randomness is simply an illusion created by our ignorance of the properties that really determine their behavior.

This may sound confusing, but it is easy to visualize.  In the non-quantum world, we often generate random results by the flip of a coin, or the roll of a 6-sided die.  However, the motion of the coin or die is dictated by the classical laws of physics.  Let’s focus on the coin.  In principle, if we knew exactly how much force we used to flip the coin, the starting orientation of the coin, the air resistance on the coin, and how much it bounces when it hits the table, we could calculate exactly how it would land, and could predict in advance whether it would land heads or tails.  But since we don’t know all these complicated variables that go into the flip of the coin, the outcome of the flip appears random to us.

EPR believed that the same thing is happening with quantum particles.  Though it appears that the behavior of a quantum particle is random, this randomness is only a consequence of the fact that we are ignorant of the real variables that dictate the behavior of the particle.  If we could figure out how to measure these “hidden variables,” they argued, we would find that the behavior of quantum particles is readily predictable, just like the flip of a coin discussed above.

One reason that EPR supported the idea of “hidden variables” was the concern that “spooky action at a distance” could travel faster than the speed of light and violate special relativity.  As we noted in part 4 of this series, however, it is apparently not possible to send information using “spooky action,” meaning that it has no conflict with relativity.

The other notable reason to argue in favor of hidden variables is, in essence, philosophical.  Since the time that Isaac Newton first published his groundbreaking work Principia in 1687, physicists had become increasingly fond of viewing nature as a “clockwork universe,” in which everything moves along perfectly in accordance with (non-random) laws of nature.  In a clockwork universe, everything that will happen in the future is predestined, the inexorable result of the laws at work.  But the Copenhagen view of quantum physics implies that the universe is inherently unpredictable: even the behavior of a single particle cannot be determined with certainty. This was a great philosophical shock to many at the time, and was objectionable to EPR.  Einstein summarized his objection in the famous and pithy quote, “God does not play dice.”

So, how does the universe really work? Is it random and unpredictable, or does it only appear to be so due to hidden variables?  For a number of years, this seemed like a metaphysical question, one outside of our ability to directly test one way or another.  Very early on, in 1932, the famed physicist John von Neumann produced a mathematical theorem that indicated that some versions of hidden variable theories are inconsistent with the mathematics of quantum mechanics, but it did not provide any experimental way to test the argument.  To put it in a probably oversimplified way, the choice between hidden variables and Copenhagen seemed for quite a few years to be more or less a matter of taste.

But, in the 1960s, Belfast-born physicist John Stewart Bell (1928-1990) shocked the physics community and demonstrated that this “matter of taste” has a well-defined flavor.  He showed that the most sensible versions of hidden variable theories — I will explain what I mean by “sensible” momentarily — result in predictions that are different from the results of quantum theory.  And these differences are experimentally measurable.

Bell started out in a somewhat different field of study, as a theoretical particle physicist, examining the fundamental forces that mediate the interactions between elementary particles. But he became intrigued by an alternative to the Copenhagen interpretation of quantum physics, known as pilot wave theory, expounded upon by physicist David Bohm.  Pilot wave theory — which we will return to in a future post — is a type of hidden variable theory, and Bell felt that it offered a better explanation for quantum weirdness, one that is less weird.  In order to champion pilot wave theory, however, he needed to explain away the “impossibility proof” of von Neumann, and in the early 1960s he wrote a paper in which he argued that von Neumann’s proof was fatally flawed.¹

But then, on the heels of seemingly saving hidden variables, Bell discovered his own famous theorem, showing that there are experimental differences between certain hidden variable theories and quantum physics.  He published his results in 1964 in a paper² titled, “On the Einstein Podolsky Rosen Paradox.”

Bell’s idea focuses on the correlations between entangled particles.  By “correlation,” we mean “relationship between the measured values of entangled pairs.”  We have already noted that if one entangled photon is measured to be horizontally polarized, then the other will be vertically polarized — but built into this statement is the assumption that each photon is measured the same way.

One way to measure the polarization of a photon is through the use of a polarizing beamsplitter, such as a Wollaston prism, which sends horizontally and vertically polarized photons in different directions, allowing them to be detected separately.

A Wollaston prism, via Wikipedia.

But if we turn the prism 45 degrees, we will no longer separate the light into horizontal and vertical polarization, but into +45 degree and -45 degree polarization. I attempt to illustrate this simply below.

For those who have been reading these posts from the beginning, this tilt is another example of a “change of basis” that we talked about in part 4 of this series for electrons and positrons.
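
In equation form, the tilted polarization states are simply combinations of the original horizontal and vertical states:

\left|+45^\circ\right\rangle = \frac{1}{\sqrt{2}}\left(\left|H\right\rangle + \left|V\right\rangle\right), \qquad \left|-45^\circ\right\rangle = \frac{1}{\sqrt{2}}\left(\left|H\right\rangle - \left|V\right\rangle\right).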

John Bell asked, theoretically, what would happen if one measures the state of entangled particles with a pair of detectors (or Wollaston prisms) that are oriented differently.  For instance, we might have one prism oriented to separate H and V and the other to separate +45 and -45.  What Bell discovered is that the relationship, or correlation, between these measurements under the assumption of sensible hidden variable theories simply cannot match with the predictions of quantum physics.

Bell’s proof was a revelation, and provided a path for physicists to answer once and for all whether hidden variables or randomness rule quantum physics. However, Bell’s test of quantum physics was a bit too difficult to implement experimentally; fortunately, a few years later a quartet of researchers³ demonstrated an inequality that sensible hidden variable theories must satisfy but which conventional quantum physics can violate.  This is known as the Clauser-Horne-Shimony-Holt inequality, or CHSH inequality.  It is still somewhat difficult to see what is going on in the CHSH inequality, so we will describe here a much more recent4 and elegant formulation of it.

Let us assume that we have two experimenters, Alice and Bob5, and that they each have a detector, like a Wollaston prism, that can be oriented in two positions, to be labeled 1 and 2.

In position 1, for example, the photon can either be measured to have polarization parallel to the detector (which we assign value +1) or can be found to have polarization perpendicular to the detector (which we assign value -1).

Let us now imagine that we make a lot of measurements of entangled photon pairs for every possible combination of detector orientations.  We define P(A_1 = B_1) as the probability, or fraction of events, for which the measurement of detector A in position 1 is equal to the measurement of detector B in position 1. This probability has a value between 0 and 1, with 0 meaning “no chance” and 1 meaning “certain.” With similar definitions for the other probabilities, we now argue that for any “sensible” hidden variable theory, the probabilities must satisfy the following inequality:

P(A_1=B_1)+P(A_2=B_2)+P(A_1=B_2)+P(A_2\neq B_1)\leq 3.

We can show this explicitly.  If we assume that the behavior of the photons is dictated by hidden variables, then any given pair of photons has a definite value in advance for each of the measurements A_1, A_2, B_1, B_2. We can simply tabulate all of the possibilities — there are 16 in all — and see that we never get more than three “true” values among the four conditions in the above inequality.

In fact, any individual photon pair in a sensible hidden variable theory will satisfy either 1 or 3 of the conditions in our inequality. If there is a mixture of photon pairs in different states, then on average the probabilities can take on intermediate values, but their sum can never be greater than 3.
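
For the skeptical reader, the tabulation is short enough to automate.  Here is a quick Python sketch, with A1, A2, B1, B2 standing for the four predetermined answers described above; it confirms that every one of the 16 possibilities satisfies exactly 1 or 3 of the four conditions.

from itertools import product

# Each photon pair carries predetermined answers (+1 or -1) for all four
# possible measurements: detector A in position 1 or 2, detector B in position 1 or 2.
counts = set()
for A1, A2, B1, B2 in product([+1, -1], repeat=4):
    satisfied = sum([A1 == B1, A2 == B2, A1 == B2, A2 != B1])
    counts.add(satisfied)

print(counts)  # {1, 3}: no assignment ever satisfies all four conditions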

What is the logic behind this inequality, i.e. how does it work?  If there is a definite value for each of the measurements, then the four conditions cannot all be satisfied at once.  If B_1=A_1 and A_1=B_2 and B_2=A_2, then it follows that A_2 = B_1, which is in conflict with the fourth condition in the table, A_2 \neq B_1.

This form of the CHSH inequality may look unfamiliar to physicists; it can be written in a more recognizable form by noting that

P(A_2=B_1) = 1-P(A_2\neq B_1).

In other words, either A_2=B_1 or A_2\neq B_1.  If we use this in the inequality, we may rewrite it as:

P(A_1=B_1)+P(A_2=B_2)+P(A_1=B_2)-P(A_2= B_1)\leq 2.

This is essentially the original CHSH inequality as formulated by C, H, S and H themselves.

So a “sensible” hidden variable theory (I still need to explain “sensible”, I know) predicts that the outcome of experiments will always produce a value of 2 or less in this inequality.  What does conventional quantum physics say?

Quantum physics predicts that this combination of probabilities can be as large as 1+\sqrt{2} \approx 2.41, exceeding the hidden variable limit of 2.  (In the equivalent form written in terms of correlations rather than probabilities, which is how C, H, S and H actually wrote it, the limits are 2 and 2\sqrt{2}, a factor of \sqrt{2} apart.)  If the Copenhagen version of quantum physics is correct, the CHSH inequality can be violated by a suitable choice of detector orientations.
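
For the numerically inclined, here is a small Python sketch of where these numbers come from.  It uses the entangled photon state described earlier in this post (if one photon is horizontal, the other is vertical) and one particular choice of detector angles; the specific angles below are simply an illustrative optimal setting I have picked, not something dictated by the argument above.

import numpy as np

# Entangled pair: if one photon is H, the other is V.
# Basis ordering of the 4-component state vector: HH, HV, VH, VV.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

def projector(theta, outcome):
    """Projector for a polarizer at angle theta: outcome +1 means the photon
    is found along the polarizer axis, -1 means along the perpendicular axis."""
    if outcome == -1:
        theta += np.pi / 2
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def p_equal(theta_a, theta_b):
    """Probability that detectors A and B report the same outcome."""
    return sum(
        psi @ np.kron(projector(theta_a, s), projector(theta_b, s)) @ psi
        for s in (+1, -1)
    )

# One illustrative choice of detector angles that maximizes the quantum value.
A1, A2 = np.radians(45), np.radians(0)
B1, B2 = np.radians(22.5), np.radians(67.5)

total = p_equal(A1, B1) + p_equal(A2, B2) + p_equal(A1, B2) + (1 - p_equal(A2, B1))
print(total)            # about 3.41 = 2 + sqrt(2): the sum from our first inequality, beating 3
print(total - 1)        # about 2.41 = 1 + sqrt(2): the rewritten form, beating its limit of 2
print(2 * (total - 2))  # about 2.83 = 2*sqrt(2): the same result in the correlation form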

How is this possible?  It turns out that conventional (Copenhagen) quantum physics violates two of the assumptions built into our hidden variables argument given above.  One of them is the existence of the hidden variables themselves, which is usually referred to as “reality”: in a hidden variable theory, the photons already possess “real” values for each of the measurements from the moment they are created.

The other assumption built into our argument, but much less obvious, is what is known as “locality,” and locality is the “sensible” part of our hidden variable theory that we’ve been referring to for some time.  Locality is the idea that the behavior of our two photons is not linked over a long distance: the behavior of each photon, and the outcome of its measurement, depends only on the local properties of that particle, and not on what is happening to the other particle.

It is in fact possible to construct nonlocal hidden variable theories, i.e. theories that have reality but do not possess locality, and which can therefore violate our CHSH inequality.  The principle of a nonlocal hidden variable theory is that all outcomes of the measurements are completely predictable, but those outcomes can depend not only on what the local particle is doing but also on how the distant particle is measured.

I’ve constructed another flowchart below, illustrating a somewhat different interpretation, to try and visualize this.

In this simple toy model of a nonlocal hidden variable theory, we imagine that the behavior of photon B depends on the orientation of detector A.  Now, for each of the different combinations of detector orientations, we can see that there is a definite outcome for the measurement at detector B that depends on detector A.  If we look back at the CHSH inequality that we began with, we find that all four probabilities within it are now equal to 1: we have found a state for which the sum of the four numbers equals 4, violating the inequality!
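
To make the toy model concrete, here is a minimal Python sketch of a rule of this kind.  It is my own simple example rather than a transcription of the flowchart above, but it captures the same idea: photon B’s answer is allowed to depend on which way the distant detector A is pointed.

# Photon A carries ordinary hidden variables: its answers are fixed in advance.
A = {1: +1, 2: +1}   # A's answer when its detector is in position 1 or 2

def B(b_setting, a_setting):
    # Photon B's answer depends on its own detector setting AND on how the
    # distant detector A is oriented; this is the nonlocal ingredient.
    if b_setting == 1 and a_setting == 2:
        return -1    # disagree only for the (A_2, B_1) combination
    return +1

conditions = [
    ("P(A_1 = B_1)", A[1] == B(1, 1)),
    ("P(A_2 = B_2)", A[2] == B(2, 2)),
    ("P(A_1 = B_2)", A[1] == B(2, 1)),
    ("P(A_2 != B_1)", A[2] != B(1, 2)),
]
for name, holds in conditions:
    print(name, "=", int(holds))
print("sum =", sum(holds for _, holds in conditions))   # 4, exceeding the limit of 3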

So Bell’s theorem, and the related CHSH inequality, provides an experimental test to determine whether local hidden variable theories are true or not.  If not, there are two possibilities: either quantum physics is, as the Copenhagen interpretation states, nonlocal and nonreal, or quantum physics is nonlocal but real.  This latter case cannot be ruled out, but it seems to ruin the whole point of introducing hidden variables in the first place!  Remember that EPR originally objected to “spooky action at a distance.” A nonlocal real theory keeps the “spooky action,” defeating a large part of the motivation for introducing hidden variables at all.

Hopefully this lengthy explanation of the “why” of Bell’s theorem wasn’t too confusing, but the “what” of Bell’s theorem is clear: it provides an experimental test to distinguish between local and nonlocal theories of quantum physics.  Almost immediately after the theorem was presented, physicists began to work hard to test it and answer the question: “local or nonlocal”?  One early test was performed by Aspect, Grangier, and Roger6, who measured the polarization properties of entangled photons produced in an atomic cascade of calcium atoms.  They found that their results “strongly violate the generalized Bell’s inequalities, and rule out the whole class of realistic local theories.”

But there was a catch.  The inequality, as we have written it, assumes that all the photons that are produced are detected with 100% efficiency.  If the detector efficiency is lower, for example 80%, then the measured CHSH value will, in principle, be only 80% of its true value.  The simple way to correct for this is to scale the result up: if your detector efficiency is 80%, you multiply your experimental value by 1.25 to recover the “true” value.

But this assumes that all photon pairs are detected equally well.  This is known as the “fair sampling assumption,” and there is no way to know for certain if it is true.  To see why this is a problem, let’s assume that we have detectors which are 80% efficient, and look again at our original flowchart:

Recall that, in this picture, every photon pair in principle gives you a result of 1 or 3. Imagine that local realism holds but, due to some physics we don’t understand, our detector only detects those photon pairs which produce a value of 3, and does so with 100% efficiency (the overall 80% efficiency results from missing all those other photons). So our CHSH measurement gives us a value of 3, which we then dutifully multiply by 1.25 to correct for the average detector efficiency, and — voila! — we have a mistaken CHSH value of 3.75, violating the CHSH inequality even though local realism holds.

The fair sampling assumption therefore provides a “loophole” for the CHSH inequality, in which it might appear to be violated but in reality it is not.  Several loopholes were recognized in early experimental tests of Bell’s theorem, and much of the work that has been done in the decades that have followed has focused on finding ways to remove them, by increasing detector efficiency and other clever tricks.

This work has slowly but inexorably eliminated loopholes, one by one, and at each stage of the process the CHSH inequality was still found to be violated, suggesting that locality cannot be true. In fact, over the past two years, the first few experimental tests of Bell’s theorem have been published which claim to be “loophole-free.”  One of these, which was done by a Dutch, Spanish, and UK collaboration7 and published in October 2015, used entangled electron spins separated in distance by a stunning 1.3 kilometers. An optical version was done by an Austrian, German, Swedish, Spanish and U.S. collaboration8 that used ultra-high-efficiency superconducting detectors to avoid the fair sampling assumption and other loopholes. The first research group mentioned above soon refined their experimental setup and published9 improved results in August 2016 that strengthened their case.  All these tests show clear violation of the CHSH inequality and therefore seem to demonstrate that quantum physics is a nonlocal phenomenon.

Curiously, these recent tests seem to have met with relatively little fanfare, given their significance. I suspect this is because most physicists have come to accept that the quantum world is inherently weird, and nonlocal, and cannot be easily reconciled with our everyday understanding of nature.  The wide variety of experimental tests from Bell’s time to the present, though often having loopholes, all point in the direction of nonlocal weirdness.

Bell himself was ironically disappointed in the results of the revolution he started with his 1964 paper. Later in life, he lightly lamented,

For me, it is so reasonable to assume that the photons in those experiments carry with them programs, which have been correlated in advance, telling them how to behave. This is so rational that I think that when Einstein saw that, and the others refused to see it, he was the rational man. The other people, although history has justified them, were burying their heads in the sand. … So for me, it is a pity that Einstein’s idea doesn’t work. The reasonable thing just doesn’t work.

But this raises a new question: what does it all mean? How do we interpret the strange behavior of nature on a quantum level? There is no certain answer to this yet, and I will attempt to discuss the possibilities in a future post.

**********************************

¹ In fact, as I understand it, Bell himself was wrong, along with many in the physics community.  Physicists had assumed, incorrectly, that von Neumann’s proof applied to all hidden variable theories, something that von Neumann himself didn’t intend.

² J.S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics 1 (1964), 195-200.

³ J.F. Clauser, M.A. Horne, A. Shimony, R.A. Holt, “Proposed experiment to test local hidden-variable theories”, Phys. Rev. Lett. 23 (1969), 880–884.

4 D. Collins, N. Gisin, N. Linden, S. Massar, and S. Popescu, “Bell Inequalities for Arbitrarily High-Dimensional Systems,” Phys. Rev. Lett. 88 (2002), 040404.

5 Alice and Bob are both dressed the same and have the same hairstyle because fuck gender norms.

6 A. Aspect, P. Grangier, and G. Roger, “Experimental tests of realistic local theories via Bell’s theorem,” Phys. Rev. Lett. 47 (1981), 460-463.

7 B. Hensen et al., “Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres,” Nature 526 (2015), 682-686.

8 M. Giustina et al., “Significant-loophole-free test of Bell’s theorem with entangled photons,” Phys. Rev. Lett. 115 (2015), 250401.

9 B. Hensen et al., “Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis,” Scientific Reports 6 (2016), 30289.


