*(Alternate titles considered for this post: Ducktoral degree, Send in the ducks, Proof by in-duck-tion, Duck Tales, Duck-ing the issue.)*

One of my specializations in optics is the theory of optical coherence, which characterizes the random fluctuations of light and the consequences of those fluctuations. It is typically one of the most difficult optics topics to teach beginning optics students, probably because it combines two challenging bodies of mathematics: wave theory and probability theory. Any teaching tool that can be used to help students visualize and understand the basics is welcome, though such tools are few and far between in coherence.

Enter the ducks! Early this year, some colleagues of mine published a short note pointing out that one can visualize a fundamental result from optical coherence theory, the van Cittert-Zernike theorem, by watching the waves a group of ducks generate when they splash into a pond!

The letter is by W.H. Knox, M. Alonso and E. Wolf, “Spatial coherence from ducks,” *Physics Today*, March 2010, p. 11; it can be freely read here. Though the letter describes the connection between coherence and ducks, it doesn’t explain what the van Cittert-Zernike theorem *is*, so I thought I’d fill in a bit of detail with this post!

It is first worth reviewing a few concepts from optical coherence theory that I covered in a “basics” post some time ago. Coherence may be loosely defined as the ability of a light wave to produce interference patterns, and these interference patterns are typically produced by bringing together light from two different parts of an extended wave. An interference pattern only manifests if the light from the two sources has a statistical similarity to one another: that is, the vibrations of the two light waves must have some definite relationship to one another:

Correlated sources of light produce interference patterns, and are said to be coherent with one another; uncorrelated sources of light do not produce interference, and are said to be incoherent.

Coherence effects are usually divided into two classes, referred to as *temporal coherence* and *spatial coherence*. Temporal coherence refers to the ability of a light wave to interfere with a time-delayed version of itself, and is usually also referred to as longitudinal coherence. It is directly related to the spectral bandwidth of the light: the more broadband the light is, the less temporally coherent it is.

We will be more concerned with spatial coherence in this post, which is the ability of light at one point in a wave to interfere with light from a spatially distinct point of the wave. This interference-causing capability can be measured using the classic Young’s double-slit experiment:

Light emanating from a thermal source (a collection of independently-radiating atoms) of transverse size $a$ is incident upon an opaque screen a distance $L$ away. The screen is perforated by two holes separated by a distance $d$. It can be demonstrated experimentally that significant interference fringes appear on the screen only if the pinholes lie within a transverse area of the functional form

$$A_c \sim \frac{\bar{\lambda}^2 L^2}{a^2},$$

where $\bar{\lambda}$ is the average wavelength of the light. This area is referred to as the *coherence area*: the larger the coherence area, the more spatially coherent the light is said to be. In other words, a light wave that is highly spatially coherent has correlated oscillations even at distantly located points.
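As a quick numerical illustration of this scaling, roughly $A_c \sim (\bar{\lambda} L/a)^2$, here is a minimal sketch; the order-unity prefactor is ignored and the numbers are made up for illustration:

```python
def coherence_area_estimate(wavelength, distance, source_size):
    """Order-of-magnitude coherence area of a thermal source,
    A_c ~ (wavelength * distance / source_size)**2.
    Numerical prefactors of order unity are ignored."""
    return (wavelength * distance / source_size) ** 2

# Example: a 1 mm thermal source viewed from 1 m away, at 500 nm
A_c = coherence_area_estimate(500e-9, 1.0, 1e-3)
print(f"coherence area ~ {A_c:.1e} m^2")  # ~2.5e-07 m^2

# Doubling the distance quadruples the coherence area
assert coherence_area_estimate(500e-9, 2.0, 1e-3) == 4 * A_c
```

Note how the quadratic dependence on distance makes the light gain spatial coherence rapidly as it propagates.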

The simple formula for coherence area has a number of intriguing consequences. The most significant for our purposes is the observation that the coherence area increases with increasing distance $L$; that is, a light wave from a thermal source *becomes increasingly spatially coherent as it propagates*. This is especially relevant for astronomy, as we will discuss momentarily.

The rigorous theoretical calculation of the increase of spatial coherence on propagation was developed independently by Dutch physicists P.H. van Cittert in 1934 and F. Zernike in 1938, and is therefore known as the *van Cittert-Zernike theorem*.

At first glance, the result of the theorem is quite surprising. In general, a thermal source (like a star or lightbulb) consists of a large number of atoms that are radiating independently of one another. The light coming from different atoms cannot interfere, and at short distances from the source it is impossible to create an interference pattern with Young’s interferometer. As the light propagates, however, it somehow manages to develop the ability to cause interference with itself — how is this possible?

To understand the van Cittert-Zernike theorem, we consider a relatively simple (and silly) system consisting of only two independent atoms. One atom radiates “apple” light, while the other radiates “orange” light — obviously, the light from the two atoms is completely dissimilar.

So what happens when the orange light and the apple light leave the source? Light waves spread as they travel, and their intensity correspondingly decreases.

The light at points 1 and 2 is a mixture of light from A and O. To get to point 1, light from O had to travel farther than light from A, and its contribution is correspondingly weaker; the light at point 1 has more “apple” than “orange”. Similarly, the light at point 2 has more “orange” than “apple”. However, we can already say that there is some relationship between the light at points 1 and 2, because they are both partly apple and partly orange: the light is *partially coherent*.

Let us now consider two points much farther away from the atoms:

We find that the light from O and A has traveled essentially the same distance to get to point 1, and is therefore an equal mixture of apple and orange. The same argument applies to point 2. The light at points 1 and 2 is now essentially the same apple/orange hybrid: the light at the two points is *fully coherent*.

The crux of the van Cittert-Zernike theorem is, in essence, a geometrical effect: the farther away one travels from the source, the less significant the distance between the individual atoms (or apples and oranges) becomes. The above argument is oversimplified, because only the amplitudes of the wavefields are considered; the phases of the waves are also important, though the result is essentially the same.
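The geometrical argument above can be checked with a toy numerical model, phases included. The sketch below (all parameters are arbitrary illustration values) places a row of independently-phased point “atoms” and computes the normalized field correlation between two observation points; close to the source the correlation is small, while far away it approaches unity:

```python
import cmath, math

def degree_of_coherence(z, source_width=4.0, n_atoms=25,
                        det_sep=1.0, wavelength=0.5):
    """Toy van Cittert-Zernike model: |mu_12| between two points
    a distance det_sep apart, at distance z from a row of
    independently radiating point 'atoms'. For independent
    emitters the ensemble average reduces to a deterministic
    sum over atoms (the cross terms average to zero)."""
    k = 2 * math.pi / wavelength
    atoms = [source_width * (i / (n_atoms - 1) - 0.5) for i in range(n_atoms)]
    x1, x2 = -det_sep / 2, det_sep / 2
    j12 = 0j        # mutual intensity <U1 U2*>
    i1 = i2 = 0.0   # intensities <|U1|^2>, <|U2|^2>
    for x in atoms:
        r1 = math.hypot(x1 - x, z)
        r2 = math.hypot(x2 - x, z)
        j12 += cmath.exp(1j * k * (r1 - r2)) / (r1 * r2)
        i1 += 1 / r1 ** 2
        i2 += 1 / r2 ** 2
    return abs(j12) / math.sqrt(i1 * i2)

print(degree_of_coherence(z=3.0))    # small: nearly incoherent near the source
print(degree_of_coherence(z=300.0))  # close to 1: nearly fully coherent far away
```

Far from the source, the path-length difference from every atom to the two detection points becomes nearly identical, so the phasors add up in step: exactly the apples-and-oranges argument above.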

So what does this have to do with ducks? If we consider a group of ducks hopping into a pond, each duck produces his/her own water wave. The ducks may be thought of as the individual “atoms” of the van Cittert-Zernike theorem; because they enter the water independently of one another, they produce waves that are independent of one another. Let’s go to video:

As the ducks enter the pond, they produce a crazy, random collection of waves. However, if one observes the waves as they travel far away, they become very regular and well-behaved circular waves: they have become spatially coherent! The ducks have very helpfully demonstrated the van Cittert-Zernike theorem for us.

The observation that the waves become circular as they propagate away from the ducks gives us another way to think about the van Cittert-Zernike theorem. A circular wave on a pond, or a spherical wave in three-dimensional space, is a fully (spatially) coherent wave, and such waves are produced by point-like excitations. An irregularly-shaped partially coherent wave can only be produced by an extended source:

As a wave becomes more coherent via the van Cittert-Zernike theorem, it becomes indistinguishable from a wave produced by a point source. From an imaging point of view, we can say that a luminous object appears more point-like the farther away we get from it!

In fact, the only way we can distinguish between a point-like object and an extended object is to collect light from an area significantly larger than the coherence area. This is, in essence, one reason why we need telescopes with very large apertures in order to image distant objects. The coherence area of sunlight, for instance, is roughly $4\times 10^{-3}\ \mathrm{mm}^2$, a very small number! Our small pupils are therefore sufficient to resolve an image of the sun (though, for obvious reasons, you shouldn’t look at it directly!) The coherence area for a nearby star, in comparison, can be on the order of several square meters, a very large number! One would need a telescope of significant aperture size to resolve any features of such a star. There is a lot one could say about the relationship between spatial coherence and imaging, but we’ll leave such discussions for another time.
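These estimates follow from the standard relation $A_c \approx \bar{\lambda}^2/\Omega$, where $\Omega$ is the solid angle the source subtends at the observer. A minimal sketch (the 0.05-arcsecond stellar diameter is an illustrative, roughly Betelgeuse-sized assumption):

```python
import math

def coherence_area(wavelength, angular_diameter):
    """Coherence area ~ wavelength**2 / Omega, where Omega is the
    solid angle subtended by the source (order-of-magnitude)."""
    omega = math.pi * (angular_diameter / 2) ** 2
    return wavelength ** 2 / omega

LAM = 500e-9  # mid-visible wavelength, in meters

sun = coherence_area(LAM, math.radians(0.53))          # sun: ~0.53 deg across
star = coherence_area(LAM, math.radians(0.05 / 3600))  # star: ~0.05 arcsec across
print(f"sunlight coherence area:  {sun * 1e6:.1e} mm^2")
print(f"starlight coherence area: {star:.1f} m^2")
```

The tiny angular size of the star, not its brightness, is what makes its light at Earth so spatially coherent.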

The charming letter by Knox, Alonso and Wolf demonstrates that even rather challenging concepts in optical physics can often be visualized by making analogies with rather familiar circumstances in our daily life. Who knew that ducks were such good physics instructors?

Optical Quackery!

Today, 13th June, is Young’s birthday.

“Optical Quackery!”

Darn — I missed that one!

(I also would’ve missed Young’s birthday.)

Make way for photons!

If you’re reading this, you’re all set, pdarner!

Suddenly, the HBT experiment makes much more sense. There are ducks in the stars! Thanks!

You’re welcome!

Does the intensity of light, per photon so to speak, make any difference here for when/where it becomes coherent? It’s an intriguing idea you take up with point sources versus extended sources.

That’s a good question. Actually, the intensity of light doesn’t affect the VCZ theorem — on average! One can only directly measure the spatial coherence by using a Young’s interferometer and measuring the visibility of interference fringes on a secondary detector. Individual photons will produce only point-like spots, but the average pattern of a large number of photons will reproduce the expected pattern predicted by VCZ.

It is sort of a general rule in coherence theory that the average coherence properties of any second-order process (a field X a field) do not depend on intensity. Higher-order processes, such as fourth-order (field X field X field X field) have effects that depend dramatically on the intensity.
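A tiny numerical illustration of that rule (synthetic random field samples, not a model of any particular source): rescaling the field amplitudes, i.e. changing the intensity, leaves the normalized second-order correlation untouched:

```python
import cmath, math, random

def normalized_correlation(u1, u2):
    """Equal-time degree of coherence |<U1 U2*>| / sqrt(<|U1|^2><|U2|^2>)."""
    num = sum(a * b.conjugate() for a, b in zip(u1, u2))
    den = math.sqrt(sum(abs(a) ** 2 for a in u1) * sum(abs(b) ** 2 for b in u2))
    return abs(num) / den

rng = random.Random(0)
# synthetic field samples: u2 is a noisy, partially correlated copy of u1
u1 = [cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(5000)]
u2 = [a * cmath.exp(1j * rng.uniform(-0.5, 0.5)) for a in u1]

mu_dim = normalized_correlation(u1, u2)
mu_bright = normalized_correlation([100 * a for a in u1], [100 * b for b in u2])
print(mu_dim, mu_bright)  # identical: the intensity scales out of mu
```

The scale factor appears squared in both numerator and denominator, so it cancels exactly; only a fourth-order quantity would notice the change in brightness.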

its ‘energy’ I meant 🙂

Thanks.

Is there perhaps an instance where geese could give some instruction on scintillation effects? (as the ducks only seem to deal with second-order stuff)

The geese are so mean, even if they knew they wouldn’t tell anyone.

Very instructive! A question, using the language of photons for the van Cittert-Zernike theorem: is it correct to say that the photons interact with one another during propagation in vacuum?

To be more clear: do the bunched detected photons arrive from the same small area of the star, or do they become bunched during propagation?

Hello!

Great explanation of the van Cittert-Zernike theorem! I finally understand it.

Light emitted from the sun is both spatially and temporally incoherent. As the post mentions, free space propagation implies a gain in spatial coherence….

I thought that one of the (mathematical) requirements of the van Cittert-Zernike theorem was that the input field had to be quasi-monochromatic… only in that case does the correlation function correspond to the Fourier transform of the source intensity… Is that correct?

For arbitrary input fields the gain of spatial coherence still occurs, but the situation varies from case to case…

For a fixed propagation distance, is the gain in spatial coherence the same for input fields with general spatial and temporal coherence properties? Does an incoherent field gain more than a more coherent one?

thanks

Brett

Things are a little tricky in talking about spatial coherence with broadband light, because the two quantities – spatial and temporal coherence – are not completely independent! For example, if I took a spatially coherent broadband plane wave, split it into two and time-delayed one past the coherence time, the two spatially separated beams would now be spatially incoherent.

Generally, though, we only talk about the VCZ theorem for (a) quasi-monochromatic fields, or (b) frequency-by-frequency. Each frequency of light will satisfy the VCZ theorem, and an appropriately-filtered broadband source like the sun can therefore exhibit the effect.

“Does an incoherent field gain more than a more coherent one?”

The change of coherence really depends on the initial coherence properties of the source: a fully coherent field won’t gain or lose any coherence on propagation, for example. As another weird counterexample, it is possible to make a field that *decreases* in coherence on propagation! Propagate an incoherent field to a phase conjugate mirror, at which it is partially coherent. On reflection, the propagation of the field is “reversed” and it retraces its path, becoming less coherent — to a limit — along the way!

Brilliant blog, and everything you say here makes perfect sense, but now I’m totally confused about a different topic that I thought I understood: coherent illumination.

As you explain, a partially coherent source can’t be resolved from a distance unless one uses an optical instrument larger than the coherence area. But what about a scene that is illuminated with a coherent source, such as an object illuminated by a laser? The light reflected from the scene is spatially coherent (although not “planar”), but the scene does not appear as a point source except at very long distances, where it presumably is not a great deal more coherent than closer in…

I don’t mean to sound like I’m splitting hairs here but is it that the resolution loss you speak of is not due to gaining coherence exactly but gaining “planarity” (which of course implies coherence).

Confused.

The concepts are a bit more complicated than I’ve described — in fact the very idea of resolution changes with coherent illumination simply due to the fact that light from two different points of the illuminated “scene” can interfere.

You are right that another way to speak of the resolution loss is gaining “planarity”! You are also right that this implies a gain of spatial coherence.

I think the way to explain what happens for coherent illumination is that one must use an optical device that can resolve the *coherent phase* of the light, as opposed to the phase of the *correlation function* for incoherent illumination. I’ll think about this some more…

OK, been thinking/reading about this a lot over the last couple of weeks and I think I understand a few things a bit better – would really welcome your thoughts…

I think I was wrong in separating “planarity” and spatial coherence. If spatial coherence is measured as E(U(P1,t)U*(P2,t)) and P1 and P2 are two points in a plane (as two points tend to be!) then spatial coherence is measuring how planar the wavefront is. (I was thinking of the mutual coherence, which measures E(U(P1,t+T)U*(P2,t)).) So gaining spatial coherence as per vCZ is the same thing as gaining “planarity” I think?

So if I illuminate a scene with a laser that is (say) perfectly coherent. Even if the scene is completely two-dimensional, when the light bounces off various reflectors in the scene, each is a different distance from the lens (i.e. it is the hypotenuse of a right triangle between the reflector and the optical axis). So when the wavefront reaches the lens it is now a superposition of waves that have traveled different distances, and so it is not planar, it is “crumpled”. Even though each successive wavefront is crumpled in exactly the same way, this is not the sort of correlation we care about (this was my earlier mistake).

Then the “crumpled” wavefront is focused by a lens to form an image. Under the interferometric view of imaging each pair of points in the lens forms a sinusoid (ala Young) and the (spatial) phase of the resulting sinusoid is determined by the (temporal) phase difference between the wavefront at each of the points. Thus the lens Fourier transforms the crumpled wavefront which re-creates the original scene (ignoring inversion and magnification) and Robert’s your mother’s brother and everyone’s happy.

Turning to spatially incoherent illumination… If I pass the laser light through a spatial phase-randomizing diffuser before it hits the scene (say, a piece of “bumpy” but perfectly clear glass), then I get “speckles” in my image, because the distance from the reflector to the optical axis is now not the only thing that determines the phase of the wave when it hits the lens.

As for temporal incoherence…. If the light is polychromatic then the “Young’s sinusoids” formed by our interferometric lens no longer extend across the entire image, but are limited to a short distance centered between our two points. Hence why astronomers use filters to get better contrast in their speckle images of stars (spatially coherent light randomised by the atmosphere but temporally incoherent)….

So far so good but this creates a new problem for me, completely incoherent image formation. But this post is probably too long already so I might wait and see what you think about what I’ve said so far before I continue.

Sorry in my last post the angle brackets I used for expectation values must have been mistaken as tags by the web-browser. I’ve changed the expectation operator to E() in the copy below. Para 2 should read

I think I was wrong in separating “planarity” and spatial coherence. If spatial coherence is measured as E(U(P1,t)U*(P2,t)) and P1 and P2 are two points in a plane (as two points tend to be!) then spatial coherence is measuring how planar the wavefront is. (I was thinking of the mutual coherence which measures E(U(P1,t+T)U*(P2,t)).) So gaining spatial coherence as per vCZ is the same thing as gaining “planarity” I think?

Hello Dr. Skullsinthestars and everyone on this great site!

thanks again for all the competent replies.

I now got so interested in this spatial coherence stuff. I found a bunch of papers talking about partially coherent fields propagating better than fully coherent fields through atmospheric turbulence….lots of math 🙂

I know that in clean air a fully coherent beam spreads less than a partially coherent one… How is it possible that a partially coherent field spreads less in turbulence than a completely coherent one? Is there some sort of “compensation” between the turbulence-induced spreading and the partial-coherence-induced spreading? A fully coherent beam suffers more speckle…

Not “any” type of partial coherence will do well in turbulence, correct? What kind of partial coherence profile do we need to prescribe to the propagating field to improve its propagation through turbulence?

I have the book *Laser Physics* by Milonni. On page 610 it says that for a fully coherent beam the angle of divergence is θ ~ λ/D (where D is the beam diameter). For a beam of the same diameter D but partially coherent, the angle is θ ~ λ/d > λ/D, where d is the diameter of the coherence area of the beam… What book/article can help me understand this simple result better? Where can I find a derivation of how the angle of divergence is related to the diameter of the coherence area?

Thanks!!!

Brett
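For what it’s worth, Milonni’s scaling can be illustrated with bare arithmetic; the numbers below (1 µm light, a 1 cm beam containing 1 mm coherence cells) are made up for illustration:

```python
def divergence_angle(wavelength, diameter):
    """Diffraction-style divergence estimate, theta ~ wavelength / diameter
    (order-unity prefactors omitted, as in Milonni's formula)."""
    return wavelength / diameter

LAM = 1e-6  # wavelength: 1 micron
D = 1e-2    # full beam diameter: 1 cm
d = 1e-3    # coherence-area diameter: 1 mm (hypothetical)

theta_coherent = divergence_angle(LAM, D)  # fully coherent beam: lambda / D
theta_partial = divergence_angle(LAM, d)   # partially coherent beam: lambda / d
print(theta_coherent, theta_partial)  # 0.0001 0.001: ten times more spread
```

The intuition, roughly, is that each coherence cell of size d diffracts independently, like a beam of that smaller diameter, so the smaller length scale sets the spread.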

Hi again everyone,

First let me say that for anyone who stumbles across my last two posts, I got it wrong. The expectation operator is an average over *time* not space and so coherence and planarity *are* two different things. This takes me back to my first post, I can see how increase in planarity (which includes coherence) would cause a loss of resolution but not “just” coherence.

Sounds like we have a lot in common Brett – I’m interested in coherence and turbulence too. I can’t answer your question properly but can offer a kind of hand-wavy argument….

The normal assumption in turbulence imaging is that the turbulence only affects the phase of the optical waves, not their (real) amplitude. In the incoherent regime phase is irrelevant (incoherent sources are linear in intensity) so it is not surprising that any phase changes induced by the turbulence have no effect. i.e. stars twinkle and planets don’t.

But then what I’ve said relates to image formation and I don’t really know how relevant this is to propagation. Possibly the argument is similar?? Something like “the Huygens secondary wavelets were already all jumbled up so it doesn’t matter if they get jumbled up more”?

I’m sure Dr SitS can give us a much better answer. Just thought I’d chuck my 2c in…

Cheers

Dave

I don’t need a phase conjugate mirror to make a beam become more spatially coherent on propagation. What about just a lens? For example, if I make a 4f system to expand a partially coherent beam, then the speckle size just got bigger, and thus the coherence area got bigger. Similarly, with just one lens you get some expansion or contraction, and that’s changing the spatial coherence!

in any case, this post is ducking awesome!

I would be grateful if you could please help me to answer this question regarding the van Cittert-Zernike theorem:

The transverse coherence function of a source is measured by using double slits, varying the slit separation d, and measuring the visibility of the fringes versus d.

Is the measured coherence function the 2D Fourier transform of the intensity distribution across the source or across the double slit?

and the source is coherence!

Coherent*

Well, I guess I have my answer:

“Incoherent source distributions gain coherence on propagation. Detecting this coherence allows source reconstruction.”

But what about coherent sources? In the near field? Does the van Cittert-Zernike theorem not apply to them?
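Regarding the double-slit question above: the van Cittert-Zernike theorem says the measured coherence function is the Fourier transform of the intensity distribution across the *source*; the slits merely sample the field. A minimal sketch for a uniform one-dimensional source, whose visibility curve is then a sinc (the angular width and wavelength below are made-up values):

```python
import math

def visibility(d, angular_width, wavelength):
    """Fringe visibility vs. slit separation d for a uniform 1D
    incoherent source: per van Cittert-Zernike, the Fourier
    transform of the SOURCE intensity profile,
    |sinc(pi * theta_s * d / lambda)|."""
    x = math.pi * angular_width * d / wavelength
    return 1.0 if x == 0 else abs(math.sin(x) / x)

LAM = 500e-9   # wavelength, meters
THETA = 1e-4   # source angular width, radians (hypothetical)

# Fringes wash out completely at d = lambda / theta = 5 mm
for d in (0.0, 1e-3, 2.5e-3, 5e-3):
    print(f"d = {d * 1e3:3.1f} mm   V = {visibility(d, THETA, LAM):.3f}")
```

Sweeping d and recording V traces out that transform point by point, which is exactly how Michelson's stellar interferometer measured stellar diameters.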