Thursday, 14 July 2016

Gravity (Why doesn't the Moon fall into the Earth?)


Question: Why doesn’t the Earth fall into the Sun or the Moon fall into the Earth?


A similar examination question asks students to “explain why the satellite does not move in the direction of the gravitational force.” Students are expected to explain why the satellite moves in a circle instead of falling inward. They should understand that the gravitational force acting on the satellite serves as a centripetal force: it continuously changes the direction of the satellite’s motion instead of pulling the satellite closer to the center of the earth.

This question was first answered by Sir Isaac Newton as follows: “[i]f a lead ball were projected with a given velocity along a horizontal line from the top of some mountain by the force of gunpowder and went in a curved line for a distance of two miles before falling to the earth, then the same ball projected with twice the velocity would go about twice as far and with ten times the velocity about ten times as far, provided that the resistance of the air were removed. And by increasing the velocity, the distance to which it would be projected could be increased at will and the curvature of the line that it would describe could be decreased, in such a way that it would finally fall at a distance of 10 or 30 or 90 degrees or even go around the whole earth or, lastly, go off into the heavens and continue indefinitely in this motion. And in the same way that a projectile could, by the force of gravity, be deflected into an orbit and go around the whole earth, so too the moon, whether by the force of gravity – if it has gravity – or by any other force by which it may be urged toward the earth, can always be drawn back toward the earth from a rectilinear course and deflected into its orbit; and without such a force the moon cannot be kept in its orbit. If this force were too small, it would not deflect the moon sufficiently from a rectilinear course; if it were too great, it would deflect the moon excessively and draw it down from its orbit toward the earth. In fact, it must be of just the right magnitude, and mathematicians have the task of finding the force by which a body can be kept exactly in any given orbit with a given velocity and, alternatively, to find the curvilinear path into which a body leaving any given place with a given velocity is deflected by a given force (Newton, 1687, p. 406).” These are not the exact words of Newton, but they are translated by Cohen and Whitman such that Newton’s Principia can be more accessible for today’s scientists and students.

How would Feynman answer?

The focus is on the question “why doesn’t the moon fall into the earth?” Feynman’s answer might include the following three points: the confusing notion of fall, how the moon is falling around the earth, and problems of defining gravity.

1. The confusing notion of fall: In The Feynman Lectures on Physics, Feynman discusses the question “why doesn’t the moon fall into the earth?” He considers two possible descriptions: the “moon does not fall at all” and “the moon does fall.” For instance, he initially mentions that “[w]e might say that the moon does not fall at all (Feynman et al., 1963, section 7-4 Newton’s law of gravitation).” This is based on a narrower notion of fall: the moon moves in a circle and thus maintains the same distance from the center of the earth.

However, it is possible to calculate the distance that the moon falls in one second. According to Feynman, “[w]e can calculate from the radius of the moon’s orbit (which is about 240,000 miles) and how long it takes to go around the earth (approximately 29 days), how far the moon moves in its orbit in 1 second, and can then calculate how far it falls in one second. This distance turns out to be roughly 1/20 of an inch in a second (Feynman et al., 1963, section 7-4 Newton’s law of gravitation).” In short, the moon does fall a short distance every second.
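Feynman’s estimate can be checked with a few lines of arithmetic. The sketch below uses only the figures quoted above (a 240,000-mile orbital radius and a 29-day period); the mile-to-meter conversion and the small-angle drop formula d ≈ s²/2r are our additions:

```python
import math

# Figures quoted by Feynman: orbital radius and period of the moon
r = 240_000 * 1609.34          # orbit radius in meters (240,000 miles)
T = 29 * 86_400                # orbital period in seconds (~29 days)

s = 2 * math.pi * r / T        # distance the moon moves along its orbit in 1 s
d = s**2 / (2 * r)             # deviation from a straight line in 1 s (small-angle approx.)

print(f"moon moves {s:.0f} m in 1 s, falls {d / 0.0254:.3f} inch")  # ~0.05 inch
```

The result, about 0.05 inch, matches Feynman’s “roughly 1/20 of an inch.”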

Importantly, in Feynman’s word, “[t]his idea that the moon falls is somewhat confusing, because, as you see, it does not come any closer. The idea is sufficiently interesting to merit further explanation: the moon falls in the sense that it falls away from the straight line that it would pursue if there were no forces (Feynman et al., 1963, section 7-4 Newton’s law of gravitation).” In essence, we can have a broader notion of “falling.” In a sense, the gravitational (attractive) force still pulls the object inward such that it does not move in a straight line. The moon does “fall away” from the straight line.

2. The moon falls around the earth: Feynman provides a good analogy that explains how an object “falls around” the earth: “[a]n object like a bullet, shot horizontally, might go a long way in one second — perhaps 2000 feet — but it will still fall 16 feet if it is aimed horizontally. What happens if we shoot a bullet faster and faster? Do not forget that the earth’s surface is curved. If we shoot it fast enough, then when it falls 16 feet it may be at just the same height above the ground as it was before. How can that be? It still falls, but the earth curves away, so it falls ‘around’ the earth (Feynman et al., 1963, section 7-4 Newton’s law of gravitation).” Simply phrased, the moon does fall around the earth: the spherical earth curves away such that the moon never reaches the ground. In addition, an orbiting object may maintain different heights depending on its velocity.
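The “falls around” condition can be made quantitative: the drop in one second (Feynman’s 16 feet) must equal the amount the earth’s surface curves away over the horizontal distance covered in that second. A minimal sketch, assuming a spherical earth of radius 6.371 × 10⁶ m and g = 9.8 m/s² (standard values, not from the text):

```python
import math

g = 9.8        # m/s^2, near-surface gravitational acceleration
R = 6.371e6    # m, mean radius of the earth

# In 1 s a projectile falls g/2 ~ 4.9 m (Feynman's "16 feet").
drop = g / 2
print(f"fall in 1 s: {drop / 0.3048:.1f} ft")

# Orbit condition: the surface must curve away by the same amount
# over the horizontal distance s covered in 1 s: s**2 / (2*R) = g/2,
# i.e. s = sqrt(g*R) -- the circular-orbit speed at the surface.
v_orbit = math.sqrt(g * R)
print(f"orbital speed at the surface: {v_orbit:.0f} m/s")  # ~7900 m/s
```

At roughly 7.9 km/s the bullet keeps falling yet never gets closer to the ground.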

Interestingly, Feynman would explain the direction of force from a historical perspective. For example, in the Messenger Lectures, Feynman (1965) elaborates that “[t]he next question was - what makes planets go around the sun? At the time of Kepler, some people answered this problem by saying that there were angels behind them beating their wings and pushing the planets around an orbit. As you will see, the answer is not very far from the truth. The only difference is that the angels sit in a different direction and their wings push inwards (p. 18).” In other words, the elliptical motions of planets around the sun are due to forces that are acting on the planets toward the sun.

3. Problems of defining gravity: Feynman might explain a problem of defining gravity due to the incompleteness of our knowledge. For instance, Feynman (1965) elaborates that “[a]ll we have done is to describe how the earth moves around the sun, but we have not said what makes it go. Newton made no hypotheses about this; he was satisfied to find what it did without getting into the machinery of it. No one has since given any machinery. It is characteristic of the physical laws that they have this abstract character. ... Why can we use mathematics to describe nature without a mechanism behind it? No one knows (Feynman et al., 1963, section 7-7 What is gravity).” However, there are alternative mathematical models that also describe the nature of gravity.

On the other hand, another problem of defining gravity could be related to the difficulty in detecting gravitational waves. In Feynman’s Lectures on Gravitation for advanced graduate students, Feynman (1995) explains that “the quantum aspect of gravitational waves is a million times further removed from detectability; there is apparently no hope of ever observing a graviton (p. 11).” However, Feynman also provided a thought experiment on a gravitational wave detector during the Chapel Hill conference: “[i]t is simply two beads sliding freely (but with a small amount of friction) on a rigid rod. As the wave passes over the rod, atomic forces hold the length of the rod fixed, but the proper distance between the two beads oscillates. Thus, the beads rub against the rod, dissipating heat (Preskill & Thorne, 1995, pp. xxv–xxvi).” Recently, gravitational waves produced by the collision of two black holes have indeed been detected.

More interestingly, in the Messenger Lectures, Feynman (1965) suggests that “the most impressive fact is that gravity is simple. It is simple to state the principles completely and not have left any vagueness for anybody to change the ideas of the law. It is simple, and therefore it is beautiful. It is simple in its pattern. I do not mean it is simple in its action - the motions of the various planets and the perturbations of one on the other can be quite complicated to work out, and to follow how all those stars in a globular cluster move is quite beyond our ability (pp. 33-34).” In short, the nature of gravity is simple, and thus, it is beautiful. 

       In summary, Feynman might explain that the moon does not fall into the earth, but it does fall around the earth. Simply phrased, the moon continues to fall, but it cannot reach the ground because the earth’s surface curves away. Essentially, it is the same gravitational force that causes an apple to fall to the earth and keeps the moon in orbit around the earth. Nevertheless, Feynman would elaborate on problems of defining gravity, and at the same time, he would state that the nature of gravity is simple and beautiful.

References
1. Feynman, R. P. (1965). The character of physical law. Cambridge: MIT Press. 
2. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley. 
3. Feynman, R. P., Morinigo, F. B., & Wagner, W. G. (1995). Lectures on gravitation (B. Hatfield, ed.). Reading, MA: Addison-Wesley. 
4. Newton, I. (1687/1999). The Principia: Mathematical Principles of Natural Philosophy, A New Translation (Trans. by B. Cohen, & A. Whitman). California: University of California Press. 
5. Preskill, J., & Thorne, K. S. (1995). Foreword to Feynman Lectures on Gravitation. In R. P. Feynman, F. B. Morinigo, & W. G. Wagner. Lectures on gravitation (B. Hatfield, ed.). Reading, MA: Addison-Wesley.

Sunday, 26 June 2016

The blue sky (Rayleigh scattering?)


Question: Why is the sky blue?


The sky may appear red or orange near sunrise and sunset. The question could be more appropriately rephrased as “Why does the sky appear blue during a cloudless day?” However, the appearance of blue sky is sometimes simply explained to be due to Rayleigh scattering. The use of the phrase “Rayleigh scattering” does not mean a genuine understanding of the phenomenon. In the words of John William Strutt (Lord Rayleigh), “[w]henever the particles of the foreign matter are sufficiently fine, the light emitted laterally is blue in colour, and, in a direction perpendicular to that of the incident beam, is completely polarized (Strutt, 1871, p. 107).” Simply put, small particles in the atmosphere are set in oscillation by incoming light rays and the particles emit polarized light rays in various directions.

How would Feynman answer?

Feynman would answer that the sky appears blue because of the scattering of sunlight by the electrons in atoms, and because of the human eye’s sensitivity to blue light. More importantly, Feynman would elaborate his answer with regard to the definition of scattering, the scatterers (the objects involved in scattering), and the human visual system (a problem in defining blue).

1. The definition of scattering: Based on the classical theory, light scattering is a phenomenon in which “a light wave from some source can induce a motion of the electrons in a piece of material, and these motions generate their own waves (Feynman et al., 1963, section 30–2 The diffraction grating).” In other words, the light rays set the electrons in atoms oscillating, and these motions can generate light waves in various directions. Essentially, the oscillation of an electron can be modeled by using Newton’s law of motion and expressed as d²x/dt² + γ(dx/dt) + ω₀²x = F/m (Feynman et al., 1963, section 23–2 The forced oscillator with damping).
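The steady-state response of this forced, damped oscillator has amplitude A(ω) = (F/m)/√((ω₀² − ω²)² + γ²ω²), which peaks when the driving frequency is near the natural frequency ω₀. A minimal sketch with illustrative parameter values (ω₀ = 1, γ = 0.1 are our choices, not from the text):

```python
import math

def amplitude(omega, omega0=1.0, gamma=0.1, F_over_m=1.0):
    """Steady-state amplitude of d2x/dt2 + gamma*dx/dt + omega0**2*x = (F/m)*cos(omega*t)."""
    return F_over_m / math.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

# Scan driving frequencies: the response is largest near omega0 (resonance).
omegas = [i / 100 for i in range(1, 300)]
peak = max(omegas, key=amplitude)
print(f"resonance near omega = {peak:.2f} (omega0 = 1.0)")
```

This resonance behavior is what makes an atom respond most strongly to light near its natural frequency.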

By solving the equation, we can derive the total amount of light energy per second scattered in all directions by a single atom, P = (½ε₀cE₀²)(8πr₀²/3)[ω⁴/(ω² − ω₀²)²]. This means that, to a first approximation (for frequencies well below resonance), the intensity of scattered light is proportional to the fourth power of the frequency (or 1/λ⁴). Feynman explains that “light which is of higher frequency by, say, a factor of two, is sixteen times more intensely scattered, which is a quite sizable difference. This means that blue light, which has about twice the frequency of the reddish end of the spectrum, is scattered to a far greater extent than red light. Thus when we look at the sky it looks that glorious blue that we see all the time! (Feynman et al., 1963, section 32-5 Scattering of light)”
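Feynman’s factor of sixteen follows directly from the ω⁴ dependence. A quick check, which also compares representative blue and red wavelengths (450 nm and 650 nm are illustrative choices, not from the text):

```python
# Scattered intensity ~ omega**4 ~ 1/lambda**4 (well below resonance)
ratio_double_freq = 2**4
print(ratio_double_freq)  # 16: doubling the frequency gives 16x the scattering

lam_blue, lam_red = 450e-9, 650e-9   # illustrative wavelengths in meters
ratio_blue_red = (lam_red / lam_blue)**4
print(f"blue is scattered {ratio_blue_red:.1f}x more than red")
```

Even for realistic visible wavelengths, blue is scattered several times more strongly than red, which is why the scattered skylight looks blue.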

Alternatively, the scattering mechanism can be explained by the quantum theory. In short, Feynman considers the scattering of light as a two-step process: “[t]he photon is absorbed, and then is re-emitted (Feynman et al., 1966, section 18-2 Light scattering).” The photon, whether left or right circularly polarized, is initially absorbed by an atom. Next, a photon is re-emitted in another direction by the oscillating electric charge of the atom. For example, “with an incoming beam of RHC light the intensity of the RHC light in the scattered radiation will vary as (1 + cos θ)² (Feynman et al., 1966, section 18-2 Light scattering).” However, the polarization of the re-emitted photon follows the oscillation of the atom, which is set by the incident photon.

2. The scatterers: The scattering mechanism is dependent on the scatterers such as hydrogen atoms. In the Alix G. Mautner Memorial Lectures, Feynman (1985) clarifies that “[a]toms that contain more than one proton and the corresponding number of electrons also scatter light (atoms in the air scatter light from the sun and make the sky blue)! (p. 100)” In addition, the scattering amplitude can be calculated by using quantum electrodynamics: “[t]he total amplitude for all the ways an electron can scatter a photon can be summed up as a single arrow, a certain amount of shrink and turn. This amount depends on the nucleus and the arrangement of the electrons in the atoms, and is different for different materials (Feynman, 1985, p. 101).”

The scattering of light is related to the atomic size and the atom’s resonant frequency. The closer the frequency of the light is to the atom’s resonant frequency, the more vigorously the atom tends to oscillate. Feynman proposes the following experiment: “[w]e can make particles that are very small at first, and then gradually grow in size. We use a solution of sodium thiosulfate with sulfuric acid, which precipitates very fine grains of sulfur. As the sulfur precipitates, the grains first start very small, and the scattering is a little bluish. As it precipitates more it gets more intense, and then it will get whitish as the particles get bigger… That is why the sunset is red, of course, because the light that comes through a lot of air, to the eye has had a lot of blue light scattered out, so it is yellow-red (Feynman et al., 1963, section 32-5 Scattering of light).”

The amount of scattering may also vary with the atoms’ locations. To cite Feynman, “if the atoms are very beautifully located in a nice pattern, it is easy to show that we get nothing in other directions, because we are adding a lot of vectors with their phases always changing, and the result comes to zero. But if the objects are randomly located, then the total intensity in any direction is the sum of the intensities that are scattered by each atom, as we have just discussed (Feynman et al., 1963, section 32-5 Scattering of light).” Moreover, the atoms are in motion such that the relative phases between any two atoms continue to change, and there is no continuous constructive interference or stronger scattering in a particular direction.

3. The human visual system: The sensation of “blue sky” is dependent on the human visual system (rods and cones). The sky does not appear violet (shortest visible wavelength) because our eyes are less sensitive to violet light.  In the words of Feynman, “we do not try to define what constitutes a green sensation, or to measure in what circumstances we get a green sensation, because it turns out that this is extremely complicated... Then we do not have to decide whether two people see the same sensation in different circumstances (Feynman et al., 1963, section 35-3 Measuring the color sensation).” Simply phrased, human beings may have different sensations of blue color.

The blue color can be measured as light rays that have wavelengths between 440 and 492 nm (Hoeppe, 2007). However, it is difficult to distinguish the two colors, blue and green, in the dark, because the sensation of a color depends on the light intensity. Feynman explains that “[i]f we are in the dark and can find a magazine or something that has colors and, before we know for sure what the colors are, we judge the lighter and darker areas, and if we then carry the magazine into the light, we may see this very remarkable shift between which was the brightest color and which was not (Feynman et al., 1963, section 35-2 Color depends on intensity).” This phenomenon is related to the Purkinje effect, and it poses a problem for defining blue color precisely.

Finally, it is worth mentioning the visual systems of other living beings and how they perceive sunlight. Feynman elaborates that “bees can apparently tell the direction of the sun by looking at a patch of blue sky, without seeing the sun itself. We cannot easily do this. If we look out the window at the sky and see that it is blue, in which direction is the sun? The bee can tell, because the bee is quite sensitive to the polarization of light, and the scattered light of the sky is polarized (Feynman et al., 1963, section 36-4 The compound (insect) eye).” However, more research is needed on the sensitivity of the visual systems of other living beings.

       In summary, the light rays from the Sun set the electrons in atoms oscillating, and these motions re-emit light waves in all directions. The amount of scattering is dependent on the scatterers such as hydrogen atoms, as well as their size and random locations. Importantly, the sky appears blue rather than violet because the human visual system is less sensitive to violet, which has a shorter wavelength.

Note:
1. A Feynman diagram of light scattering can be found on page 100 of QED: The strange theory of light and matter (Feynman, 1985).

2. The color of the sky can be used to estimate the size of air molecules by assuming the density fluctuations in the atmosphere. To quote Einstein (1910), “[a]s a rough calculation shows, this formula might very well explain why the light given off by the irradiated atmosphere is predominantly blue. In this connection it is worth noting that our theory does not make any direct use of the assumption of the discrete distribution of matter (p. 247).”

3. Feynman’s explanation of the polarized sky: “The first example of the polarization effect that we have already discussed is the scattering of light. Consider a beam of light, for example from the sun, shining on the air. The electric field will produce oscillations of charges in the air, and motion of these charges will radiate light with its maximum intensity in a plane normal to the direction of vibration of the charges. The beam from the sun is unpolarized, so the direction of polarization changes constantly, and the direction of vibration of the charges in the air changes constantly. If we consider light scattered at 90°, the vibration of the charged particles radiates to the observer only when the vibration is perpendicular to the observer’s line of sight, and then light will be polarized along the direction of vibration. So scattering is an example of one means of producing polarization (Feynman et al., 1963, section 33-2 Polarization of scattered light).”

References:
1. Einstein, A. (1910). The Theory of Opalescence of Homogeneous Fluids and Liquid Mixtures near the Critical State. Annalen der Physik, 33, 1275–1298. In The Collected Papers of Albert Einstein, Volume 3: The Swiss Years: Writings 1909-1911 (Translated by A. Beck & D. Howard). Princeton: Princeton University Press. 
2. Feynman, R. P. (1985). QED: The strange theory of light and matter. Princeton: Princeton University Press.
3. Feynman, R. P., Leighton, R. B., & Sands, M. L. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley. 
4. Feynman, R. P., Leighton, R. B., & Sands, M. L. (1966). The Feynman Lectures on Physics, Vol III: Quantum Mechanics. Reading, MA: Addison-Wesley. 
5. Hoeppe, G. (2007). Why the Sky Is Blue: Discovering the Color of Life. Princeton, NJ: Princeton University Press. 
6. Strutt, J. W. (1871). XV. On the light from the sky, its polarization and colour. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(271), 107-120.

Sunday, 12 June 2016

Entropy (remains unchanged or increases?)


Question: Explain the change, if any, in the entropy of an ideal gas when it has completed one cycle.


In general, the entropy of an ideal gas is a state function and therefore path independent. Thus, the entropy of the ideal gas remains unchanged after a complete cycle. Interestingly, Swendsen (2011) provided a list of physicists’ disagreements on the meaning of entropy:

1. The basis of statistical mechanics: theory of probability or other theory?
2. The entropy of an ideal classical gas of distinguishable particles: extensive or not extensive?
3. The properties of macroscopic classical systems with distinguishable and indistinguishable particles: same or different?
4. The entropy of a classical ideal gas of distinguishable particles: additive or not additive?
5. Boltzmann defined (or did not define?) the entropy of a classical system by the logarithm of a volume in phase space.
6. The symbol W in the equation S = k log W, which is inscribed on Boltzmann’s tombstone: a volume in phase space or the German word “Wahrscheinlichkeit” (probability)?
7. The entropy should be defined in terms of the properties of an isolated system or a composite system?
8. The validity of thermodynamics: a finite system or in the limit of infinite system size?
9. Extensivity is (or is not?) essential to thermodynamics.

However, there are also other disagreements, such as whether we should define entropy by using dS = dQ/T. One may argue that the entropy of an isolated system can be well defined even when its temperature is unknown, undefined, or zero.
(e.g. https://www.av8n.com/physics/thermo/entropy.html#sec-not-dq)

How would Feynman answer?


Feynman would explain that the entropy of the ideal gas remains constant or increases after it has completed one cycle. However, it is more meaningful to understand his explanations from the perspectives of “state function,” “idealized process,” and “problems of defining entropy.”


1. Entropy is a function of the condition: Generally speaking, the concept of entropy may be defined as a measure of disorder. Thus, Feynman explains that “[i]f we have white and black molecules, how many ways could we distribute them among the volume elements so that white is on one side, and black on the other? On the other hand, how many ways could we distribute them with no restriction on which goes where? Clearly, there are many more ways to arrange them in the latter case. We measure ‘disorder’ by the number of ways that the insides can be arranged, so that from the outside it looks the same. The logarithm of that number of ways is the entropy (Feynman et al., 1963, section 46–5 Order and entropy).” However, entropy is also dependent on our knowledge of the possible locations of the molecules. In Feynman’s (1996) words, “[t]his concept of ‘knowledge’ is extremely important, and central to the concept of entropy (p. 142).”
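Feynman’s counting definition can be illustrated with a toy model (a hypothetical example, not one worked in the text): place N white and N black molecules in 2N cells and compare the segregated arrangement with the unrestricted one; the entropy, in units of Boltzmann’s constant k, is the logarithm of the number of arrangements.

```python
import math

N = 10  # 10 white and 10 black molecules in 20 cells

# Number of distinguishable color patterns:
W_segregated = 1                      # all white on one side, all black on the other
W_unrestricted = math.comb(2 * N, N)  # any placement of the 10 white molecules

# Entropy in units of k: S = ln W
S_mixed = math.log(W_unrestricted)
print(f"W = {W_unrestricted}, mixing entropy = {S_mixed:.1f} k")
```

The unrestricted case has vastly more arrangements, so the mixed state is the higher-entropy, more “disordered” one.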

Furthermore, Feynman mentions that “we have found another quantity which is a function of the condition, i.e., the entropy of the substance. Let us try to explain how we compute it, and what we mean when we call it a “function of the condition (Feynman et al., 1963, section 44-6 Entropy).” That is, entropy is a state function and the change in entropy is dependent on the initial and final state of a system. Importantly, Feynman states that “it applies only to reversible cycles. If we include irreversible cycles, there is no law of conservation of entropy (Feynman et al., 1963, section 44-6 Entropy).” In other words, the change in entropy is dependent on the process such as a reversible process or irreversible process.

2. A reversible process is an idealized process: A reversible process is a quasi-static process whose direction can be reversed by means of infinitesimal changes. Feynman clarifies that “in any process that is irreversible, the entropy of the whole world is increased. Only in reversible processes does the entropy remain constant. Since no process is absolutely reversible, there is always at least a small gain in the entropy; a reversible process is an idealization in which we have made the gain of entropy minimal (Feynman et al., 1963, section 44-6 Entropy).” In short, a reversible process is an idealization that cannot be fully realized in the real world. Thus, in Feynman Lectures on Computation, Feynman (1996) elaborates that “[f]or an irreversible process, the equality is replaced by an inequality, ensuring that the entropy of an isolated system can only remain constant or increase (p. 141).” Because no real process is perfectly reversible, however, the entropy of the universe still increases in practice.

Importantly, the concept of entropy could be defined in terms of reversible engines. According to Feynman, “we will lose something if the engines contain devices in which there is friction. The best engine will be a frictionless engine. We assume, then, the same idealization that we did when we studied the conservation of energy; that is, a perfectly frictionless engine (Feynman et al., 1963, section 44-3 Reversible engines).” That is, the reversible engines should be ideally frictionless. Therefore, Feynman explains that “the ideal engine is a so-called reversible engine, in which every process is reversible in the sense that, by minor changes, infinitesimal changes, we can make the engine go in the opposite direction. That means that nowhere in the machine must there be any appreciable friction (Feynman et al., 1963, section 44-3 Reversible engines).” Simply phrased, the reversible engine is a theoretical idealization.

3. Problems of defining entropy: Currently, there is no consensual definition of entropy among physicists. In a footnote of Feynman Lectures on Computation, it is stated that “[l]egend has it that Shannon adopted this term on the advice of John von Neumann, who declared that it would give him ‘... a great edge in debates because nobody really knows what entropy is anyway’ (Feynman, 1996, p. 123).” Thus, we may expect the concept of entropy to be redefined in the future. More importantly, Feynman emphasizes that the expression ∆S = ∫dQ/T does not completely define the entropy; it defines only the difference of entropy between two physical conditions. He explains that “[o]nly if we can evaluate the entropy for one special condition can we really define S absolutely (Feynman et al., 1963, section 44-6 Entropy).” For example, we can calculate the final entropy of an isolated system by adding the change in entropy to the initial entropy, but this requires determining the initial entropy, such as the entropy of the system at absolute zero.
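The state-function claim can be verified numerically for an ideal gas. Between two states, ΔS = nCᵥ ln(T₂/T₁) + nR ln(V₂/V₁); summed around any closed loop of states, the changes return exactly zero. A minimal sketch (monatomic gas; the cycle states are arbitrary illustrative choices):

```python
import math

R = 8.314          # J/(mol K), gas constant
Cv = 1.5 * R       # monatomic ideal gas
n = 1.0            # mol

def delta_S(T1, V1, T2, V2):
    """Entropy change of an ideal gas between states (T1, V1) and (T2, V2)."""
    return n * Cv * math.log(T2 / T1) + n * R * math.log(V2 / V1)

# An arbitrary closed cycle of states (T in K, V in m^3), chosen for illustration
cycle = [(300, 0.01), (600, 0.01), (600, 0.03), (300, 0.02), (300, 0.01)]

total = sum(delta_S(*a, *b) for a, b in zip(cycle, cycle[1:]))
print(f"total entropy change around the cycle: {total:.2e} J/K")  # ~0
```

Because entropy depends only on the state, the sum around any closed path vanishes (up to floating-point rounding), regardless of how the states are visited.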

Interestingly, the concept of entropy is related to a problem of defining reversible processes. For example, in Samiullah’s (2007) words, “reversible processes are idealized processes in which entropy is exchanged between a system and its environment and no net entropy is generated (p. 609).” Essentially, he proposes to define a reversible process in terms of the constancy of entropy such that it distinguishes reversible processes from quasi-static processes. However, there is a circularity problem if the reversible process is defined in terms of entropy, whereas the concept of entropy is defined in terms of the reversible process. Perhaps one may still argue whether this problem of circularity is trivial or unavoidable.

       In summary, the entropy of the ideal gas could remain constant or increase after a cyclic process. Importantly, a reversible process is an idealized process whose direction can be “reversed” by infinitesimally small and extremely slow steps. However, Feynman would also discuss problems of defining entropy.

Note:
You may want to take a look at this website:
http://physicsassessment.blogspot.sg/2016/06/ib-physics-2015-higher-level-paper-2_9.html

References:
1. Feynman, R. P., Hey, J. G., & Allen, R. W. (1998). Feynman lectures on computation. Reading, Massachusetts: Addison-Wesley.
2. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.
3. Swendsen, R. H. (2011). How physicists disagree on the meaning of entropy. American Journal of Physics, 79(4), 342-348.

Thursday, 19 May 2016

The quantization of energy (discrete or continuous?)


Question: If the energy can change only by quanta of hf, calculate the actual change in amplitude when the energy changes by one quantum. Why does the decay or increase of the oscillatory motion of springs, pendulums, and the like generally seem to us continuous when it is really quantized? (Holton & Brush, 2001).


This question is about the nature of energy, which may be described as continuous or discrete. Currently, many physics textbook authors state that energy is ultimately composed of indivisible (or irreducible) tiny lumps. For example, some authors elaborate that particles can have only certain amounts of energy such as e, 2e, 3e, etc., but cannot have 1.5e or any other fractional multiple of e. Furthermore, some physicists consider physical quantities such as space and time to be fundamentally discrete rather than continuous. Thus, physics students may form the incorrect idea that energy must only be discrete rather than continuous.

Historically speaking, Planck is sometimes known as the reluctant father of quantum theory. For instance, Kragh (1999) writes that “Planck did not really understand the introduction of energy elements as a quantization of energy, i.e., that the energy of the oscillators can attain only discrete values (p. 62).” However, Planck was rightly cautious about the nature of energy: the deceleration of “free” electrons can result in the emission of a continuous spectrum of electromagnetic radiation. In 1909, Sommerfeld coined the term Bremsstrahlung, which means “braking radiation.”

How would Feynman answer? 

Feynman would answer that the nature of energy can be discrete or continuous. We should understand his position from the perspective of “bound/unbound state,” “energy band,” and “definition of energy.”

1. Bound/unbound state: No physical law states that energy must always consist of definite quanta or amounts. Feynman mentions that “[y]ou may have heard that photons come out in blobs and that the energy of a photon is Planck’s constant times the frequency. That is true, but since the frequency of light can be anything, there is no law that says that energy has to be a certain definite amount (Feynman et al., 1963, section 4–4 Other forms of energy).” It is simply not true that the energy of a photon must come in certain discrete lumps. Technically speaking, the energy of electromagnetic radiation emitted by atoms is quantized because electrons bound in atoms have discrete energy levels. The energy emitted by an atom corresponds to the difference (or spacing) between these energy levels and is numerically equal to E = hf.

Importantly, Feynman provides an excellent analogy for the discreteness of energy: “if sound is confined to an organ pipe or anything like that, then there is more than one way that the sound can vibrate, but for each such way there is a definite frequency. Thus, an object in which the waves are confined has certain resonance frequencies. It is, therefore, a property of waves in a confined space—a subject which we will discuss in detail with formulas later on - that they exist only at definite frequencies. And since the general relation exists between frequencies of the amplitude and energy, we are not surprised to find definite energies associated with electrons bound in atoms (Feynman et al., 1963, section 38–5 Energy levels).” In short, the resonant frequencies of confined sound waves are analogous to the definite energy levels of electrons bound in atoms.

In general, a free particle that is not bound to an atom can have continuous (or unrestricted) energy levels. For instance, “[w]hen the electron is free, i.e. when its energy is positive, it can have any energy; it can be moving at any speed. But bound energies are not arbitrary (Feynman et al., 1963, section 38-5 Energy levels).” In other words, “[i]f the energy E is above the top of the potential well, then there are no longer any discrete solutions, and any possible energy is permitted. Such solutions correspond to the scattering of free particles by a potential well (Feynman et al., 1966, section 16–6 Quantized energy levels).” To summarize, bound particles have discrete energy levels, whereas free particles have continuous energy levels (see Fig 1).




Fig 1
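The discreteness of bound-state energies can be illustrated with the simplest textbook case, a particle in an infinite square well, where E_n = n²h²/(8mL²). This is a minimal sketch, not Feynman’s own example; the 1 nm well width is an assumed, atom-scale value:

```python
# Discrete levels of an electron bound in an infinite square well:
# E_n = n^2 h^2 / (8 m L^2). A free electron, by contrast, may take any E > 0.
h = 6.626e-34    # Planck's constant (J s)
m_e = 9.109e-31  # electron mass (kg)
eV = 1.602e-19   # one electron-volt in joules
L = 1e-9         # well width: 1 nm (an assumed, atom-scale value)

def well_level_eV(n):
    """Energy of the nth bound state, in electron-volts."""
    return n**2 * h**2 / (8 * m_e * L**2) / eV

levels = [well_level_eV(n) for n in (1, 2, 3)]
# Only these discrete values (about 0.38, 1.50, 3.39 eV) are allowed;
# the spacing between adjacent levels grows with n.
```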

2. Energy band: In the band theory of solids, an energy band consists of a large number of energy levels that are very close together. In Feynman Lectures on Computation, it is explained that “this theory predicts that the possible physical states that can be occupied by electrons within a material are arranged into a series of (effectively continuous) strata called ‘bands,’ each characterized by a specific range of energies for the allowed electron energy levels within it (Feynman et al., 1998, p. 213).” Loosely speaking, textbook authors may describe the energy levels of electrons in a semiconductor as essentially continuous. However, with current instrumentation and techniques, we can empirically determine the energy gap between the conduction band and the valence band, but not the discreteness of energy levels within an energy band.

Theoretically speaking, physicists may deduce the energy gaps and energy bands (continuous energy levels) by using the Kronig–Penney (one-dimensional lattice) model. For example, Feynman explains that “you can see from the figure, the energy can go from (E₀ − 2A) at k = 0 to (E₀ + 2A) at k = ±π/b. The graph is plotted for positive A; if A were negative, the curve would simply be inverted, but the range would be the same. The significant result is that any energy is possible within a certain range or ‘band’ of energies, but no others (Feynman et al., 1966, section 13–2 States of definite energy).” Nevertheless, based on a finite lattice model, the number of possible energy levels of electrons in a semiconductor crystal may be as large as the number of atoms. This number can be related to Avogadro’s constant, which is of the order of 10²³.
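Feynman’s band result can be sampled directly from the tight-binding dispersion E(k) = E₀ − 2A cos(kb). The sketch below uses illustrative values for E₀, A, and b (not taken from any particular crystal) and shows that a finite lattice of N atoms contributes N allowed levels that densely fill the band between E₀ − 2A and E₀ + 2A:

```python
import math

# Tight-binding dispersion for a one-dimensional line of atoms:
# E(k) = E0 - 2A cos(k b), a band running from E0 - 2A to E0 + 2A.
E0 = 0.0  # on-site energy (eV); illustrative value
A = 1.0   # coupling amplitude between neighbouring atoms (eV); illustrative
b = 1.0   # lattice spacing (arbitrary units)

def energy(k):
    return E0 - 2 * A * math.cos(k * b)

# For a finite ring of N atoms there are exactly N allowed k values in the
# Brillouin zone (-pi/b, pi/b]; each gives one energy level in the band.
N = 1000
ks = [-math.pi / b + 2 * math.pi / b * (i + 1) / N for i in range(N)]
band = sorted(energy(k) for k in ks)
# band[0] is close to E0 - 2A (at k = 0); band[-1] is close to E0 + 2A
# (at k = +/- pi/b); the N levels are effectively continuous for large N.
```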

3. Definition of energy: According to Feynman, “[i]t is important to realize that in physics today, we have no knowledge of what energy is (Feynman et al., 1963, section 4–1 What is energy?).” This does not mean that Feynman had no knowledge of energy, nor did he explicitly say that energy cannot be defined. On the contrary, he had deep knowledge of energy, such as the quantization of energy. Moreover, in his Ph.D. thesis, Feynman (1942) proved the conservation of energy by using the transformation of time-displacement. Interestingly, Feynman adopted a more general approach than Noether, and he did not know about Noether’s theorem (Mehra, 1994).

Currently, physics textbooks define energy as the “ability (or capacity) to perform work” or as “a measure of change” (e.g. Hecht, 2003). These textbook definitions include possible “effects” of energy, but they do not specify the nature of energy. The term “ability” or “capacity” does not specifically tell us what energy is. In his autobiography, Feynman (1985) mentions that “[i]t’s also not even true that ‘energy makes it go,’ because if it stops, you could say, ‘energy makes it stop’ just as well (p. 298).” Thus, it is likely that Feynman did not agree with these common definitions of energy.

However, Feynman clarifies that “we do not understand this energy as counting something at the moment, but just as a mathematical quantity, which is an abstract and rather peculiar circumstance (Feynman et al., 1963, section 4–4 Other forms of energy).” Essentially, energy is an abstract mathematical quantity. Furthermore, energy is not something concrete like children’s toy blocks (Feynman et al., 1963, section 4–1 What is energy?). In other words, energy is not a material substance; it is given meaning through mathematical calculations.

To conclude, Feynman would explain that bound particles have discrete energy levels, whereas free particles have continuous energy levels. In addition, he would elaborate that an energy band consists of a large number of energy levels that are very close together, or effectively continuous. However, Feynman disagreed with common definitions of energy and considered energy a mathematical abstraction.

Note:
1. It is inappropriate to quote Feynman that “we have no knowledge of what energy is” and then conclude that the concept of energy cannot be defined at all. In Feynman’s words, “[d]uring the war, I didn’t have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn’t be real and probabilities of events wouldn’t add up to 100%. That is, if you took the probability that this would happen and that would happen - everything you could think of would happen, it would not add up to one (Feynman, 1965, p. 22).”

2. In A survey of physical theory, Planck (1925) writes that “[e]nergy itself cannot be measured, but only a difference of energy. Therefore, one did not previously deal with energy, but with work, and Ernst Mach, who was concerned to a great extent with the conservation of energy, but avoided all speculations outside the domain of observation, has always refrained from talking of energy itself (pp. 106-107).”

References
1. Feynman, R. P. (1942). Feynman’s Thesis: A New Approach to Quantum Theory. Singapore: World Scientific.
2. Feynman, R. P. (1965). The development of the space-time view of quantum electrodynamics. In Brown, L. M. (ed.), Selected papers of Richard Feynman. Singapore: World Scientific.
3. Feynman, R. P. (1985). Surely you’re joking, Mr. Feynman. New York: Norton.
4. Feynman, R. P., Hey, J. G., & Allen, R. W. (1998). Feynman lectures on computation. Reading, Massachusetts: Addison-Wesley.
5. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.
6. Feynman, R. P., Leighton, R. B., & Sands, M. L. (1966). The Feynman Lectures on Physics, Vol III: Quantum mechanics. Reading, MA: Addison-Wesley.
7. Hecht, E. (2003). Physics: Algebra/Trigonometry (3rd ed.). Pacific Grove, California: Brooks/Cole Publishing.
8. Holton, G. and Brush, S. G. (2001). Physics the Human Adventure: From Copernicus to Einstein and beyond. New Brunswick: Rutgers University Press.
9. Kragh, H. (1999). Quantum generations: A history of physics in the twentieth century. Princeton: Princeton University Press.
10. Mehra, J. (1994). The Beat of a Different Drum: The life and science of Richard Feynman. Oxford: Oxford University Press.
11. Planck, M. (1925/1993). A Survey of Physical Theory. Ontario: Dover.

Wednesday, 27 April 2016

Nature of plane mirror image


Question: What are the characteristics of an image formed by a plane mirror?


In general, the characteristics of a plane mirror image can be described as follows:
1. The image is left-right reversed or front-back reversed in relation to the object.
2. The image is virtual because it cannot be formed on a screen.
3. The image has the same size as the object.
4. The image is upright.
5. The image is located at the same distance behind the mirror as the object is in front of the mirror. (Image distance = object distance)
6. The image has the same color as the object.

It has been controversial whether the nature of the image formed by a plane mirror should be described as “left-right reversed” or “front-back reversed.” This is related to an interesting question: why does a mirror reverse left and right but not up and down? Currently, the plane mirror image may be specified in textbooks as “laterally inverted” (Muncaster, 1993), “left-right reversed” (Cutnell & Johnson, 2004), “appears left-right reversed” (Giancoli, 2005), “front-back reversed” (Knight, 2004), or “depth inverted” (Tipler & Mosca, 2004). Interestingly, Tomonaga, who shared the 1965 Nobel Prize in Physics with Feynman and Schwinger, discussed the mirror reflection problem with his colleagues and believed that the top-bottom and front-back axes had absoluteness in a “psychological space” (Tabata & Okuda, 2000; Tomonaga, 1965). However, there is no agreement on the description and explanation of the plane mirror image.
The concept of the plane mirror image is related to terms such as “parity,” “enantiomorph,” and “chirality.” For example, Lord Kelvin defined the concept of chirality in a footnote to a lecture titled The molecular tactics of a crystal. This famous footnote reads:

“I call any geometrical figure, or group of points, chiral, and say that it has chirality, if its image in a plane mirror, ideally realized, cannot be brought to coincide with itself. Two equal and similar right hands are homochirally similar. Equal and similar right and left hands are heterochirally similar or ‘allochirally’ similar (but heterochirally is better). These are also called ‘enantiomorphs,’ after a usage introduced, I believe, by German writers. Any chiral object and its image in a plane mirror are heterochirally similar (Kelvin, 1894, p. 27).”

The term chirality is derived from the Greek word for “hand.” Naturally, human hands are chiral objects because the left hand, for example, is a non-superimposable mirror image of the right hand. In other words, an object is chiral if its mirror image cannot be brought to coincide with the object by rotations and translations alone.
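Kelvin’s test can even be carried out computationally for planar grid figures: reflect the figure, then check whether any rotation (after discarding translations) brings it back into coincidence with the original. This is a minimal sketch with my own illustrative examples, an L-tromino (achiral in the plane) and an S-tetromino (chiral in the plane):

```python
# Kelvin's chirality test for planar grid figures: a figure is chiral if no
# rotation plus translation maps its plane-mirror image back onto itself.
def normalize(cells):
    """Translate a set of grid cells so its bounding box starts at (0, 0)."""
    x0 = min(x for x, y in cells)
    y0 = min(y for x, y in cells)
    return frozenset((x - x0, y - y0) for x, y in cells)

def rotations(cells):
    """Yield the four 90-degree rotations of a figure, normalized."""
    for _ in range(4):
        cells = {(-y, x) for x, y in cells}  # rotate 90 degrees about origin
        yield normalize(cells)

def is_chiral(cells):
    mirrored = {(-x, y) for x, y in cells}   # reflect across the y-axis
    return normalize(mirrored) not in set(rotations(cells))

l_tromino = {(0, 0), (0, 1), (1, 0)}            # coincides with its image
s_tetromino = {(0, 0), (1, 0), (1, 1), (2, 1)}  # its image is the Z-piece
# is_chiral(l_tromino) is False; is_chiral(s_tetromino) is True
```

Hands are the three-dimensional analogue: the same test with 3D rotations would report a right hand as chiral, since no rotation turns it into a left hand.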

How would Feynman answer?

Feynman’s answer may include the concepts of “front-back reversed” and the “handedness of an object” pertaining to the nature of the plane mirror image. However, it is more meaningful to understand his explanations of “front-back reversed,” the “handedness of an object,” and the “definitions of left and right.”

1. Front-back reversed: During a BBC Television interview, Feynman (1994) explains that “if you wave one hand, then the hand in the mirror that waves is opposite it – the hand on the ‘east’ is the hand on the ‘east,’ and the hand on the ‘west’ is the hand on the ‘west.’ The head that’s up is up, and the feet that are down are down. So everything’s really all right. But what’s wrong is that if this is ‘north,’ then your nose is to the back of your head, but in the image, the nose is to the ‘south’ of the back of your head. What happens is, the image has neither the right nor the left mixed up with the top and the bottom, but the front and the back have been reversed, you see (p. 37).” In a sense, there is a semantic problem in describing the nature of the plane mirror image (Ansbacher, 1992). It is remarkable that Feynman is not constrained by the words “left” and “right,” and he is able to replace them with “east” and “west.” Thus, the plane mirror image can be simply described as either “north-south reversed” or “front-back reversed.”

In general, a phrase such as “front-back reversed” is imprecise. To be more precise, “[a] mirror image reproduces exactly all object points in two spatial directions parallel to the mirror surface, but reverses the sequential ordering of object points in the direction of the third spatial axis, perpendicular to the mirror plane (Galili & Goldberg, 1993, p. 463).” In other words, a plane mirror does not change the coordinates (say, y and z) in the two directions parallel to the mirror, but it reverses the x coordinates, which lie along the mirror’s axis (the direction perpendicular to its surface). However, physics teachers may find it cumbersome to describe the nature of the plane mirror image in such detail.
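Galili and Goldberg’s description translates directly into coordinates: reflection in a plane mirror negates only the component along the mirror’s normal and leaves the two parallel components unchanged. A minimal sketch (the point coordinates and axis assignments are arbitrary illustrative choices):

```python
# Reflection in a plane mirror through the origin: only the coordinate along
# the mirror's normal (the "axis of the mirror") changes sign.
def mirror_image(point, normal_axis=0):
    """Reflect a 3D point across the plane perpendicular to the given
    axis (0 = x, 1 = y, 2 = z) and passing through the origin."""
    image = list(point)
    image[normal_axis] = -image[normal_axis]
    return tuple(image)

nose_tip = (0.1, 0.3, 1.7)  # a point on the front of a face (assumed values)

# Mirror in front of the object (normal along x): front-back reversed.
front_back = mirror_image(nose_tip)                 # (-0.1, 0.3, 1.7)
# Mirror beside the object (normal along y): appears left-right reversed.
left_right = mirror_image(nose_tip, normal_axis=1)  # (0.1, -0.3, 1.7)
```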

In short, the plane mirror image may appear “left-right reversed,” “top-bottom reversed,” or “front-back reversed” (see Fig 1a, Fig 1b, and Fig 1c). According to Feynman (1994), “we say left and right are interchanged, but really the symmetrical way is it’s along the axis of the mirror that things get interchanged (p. 38).” The description of the plane mirror image depends on the “axis of the mirror.” In Fig 1a, the plane mirror image of a right-handed glove appears “left-right reversed”; the axis of this mirror is horizontal and the mirror is placed beside the glove. In Fig 1b, the plane mirror image of the upright glove appears “top-bottom reversed”; the axis of the mirror is vertical and the mirror is placed below the glove. In Fig 1c, the plane mirror image of the glove appears “front-back reversed”; the axis of this mirror is horizontal and the mirror is placed in front of the glove. Essentially, the location of the mirror relative to the object determines how the image is described.



Fig 1a. Mirror beside the object (“left-right reversed”)

Fig 1b. Mirror below the object (“top-bottom reversed”)

Fig 1c. Mirror in front of the object (“front-back reversed”)

2. The handedness of an object: Although Feynman explains that a characteristic of the plane mirror image is being front-back reversed, he also describes the handedness of an object. Feynman provides the following example: “[t]he first molecule, the one that comes from the living thing, is called L-alanine. The other one, which is the same chemically, in that it has the same kinds of atoms and the same connections of the atoms, is a ‘right-hand’ molecule, compared with the ‘left-hand’ L-alanine, and it is called D-alanine… (Left-handed sugar tastes sweet, but not the same as right-handed sugar.) So it looks as though the phenomena of life permit a distinction between ‘right’ and ‘left,’ or chemistry permits a distinction because the two molecules are chemically different (Feynman et al., 1963, section 52–4 Mirror reflections).” Simply phrased, the chemical and physical properties of an object can depend on its handedness.

The handedness of a particle is an important concept that helps in understanding the principle of the conservation of parity (mirror symmetry). As an analogy, the Swedish physicist Cecilia Jarlskog identified a similarity between left-handed neutrinos and “vampires”: they do not have a mirror image (’t Hooft, 1997). In other words, a plane mirror changes the handedness of a particle, and this “mirror particle” may or may not be observed in nature. In essence, nature has a preference for the handedness of the particle, and it does not conform to the principle of the conservation of parity. Importantly, T. D. Lee and C. N. Yang (1956) proposed experiments that could demonstrate the non-conservation of parity in weak interactions. This resolved the famous “tau-theta puzzle” pertaining to the decay of kaons: the tau and theta supposedly had the same mass, yet they decayed into products of opposite parity.

The non-conservation of parity cannot be illustrated simply by the handedness of a particle; it also involves physical conditions such as very low temperatures and strong magnetic fields. To quote Feynman: “When we put cobalt atoms in an extremely strong magnetic field, more disintegration electrons go down than up. Therefore, if we were to put it in a corresponding experiment in a ‘mirror,’ in which the cobalt atoms would be lined up in the opposite direction, they would spit their electrons up, not down; the action is unsymmetrical (Feynman et al., 1963, section 52-7 Parity is not conserved!).” In this experiment, the observation of a preferred direction of the decay electrons helps to establish the violation of parity. It is pertinent to understand how a “mirror condition” such as the magnetic field is related to the handedness of the object (e.g. the cobalt atoms).

3. Definitions of “left” and “right”: Feynman would discuss the problems of defining “left” and “right” and explain that “the world does not have to be symmetrical. For example, using what we may call ‘geography,’ surely ‘right’ can be defined. For instance, we stand in New Orleans and look at Chicago, and Florida is to our right… (Feynman et al., 1963, section 52–4 Mirror reflections).” That is, it is possible to define “right” and “left” by using geography because there is no symmetry between two locations such as Chicago and Florida. However, the directions up-down, left-right, and front-back are arbitrarily defined depending on one’s orientation and perspective. Generally speaking, definitions of left and right are ambiguous because an observer can rotate about a vertical axis. Thus, one should first define the directions “up,” “down,” “front,” and “back.”

Interestingly, Feynman would explore how to tell a Martian the definitions of “left” and “right.” During a lecture at Cornell University, he mentions the following procedure: “take a radioactive stuff, a neutron, and look at the electron which comes from such a beta-decay. If the electron is going up as it comes out, the direction of its spin is into the body from the back on the left side. That defines left. That is where the heart goes (Feynman, 1965, p. 103).” Alternatively, in The Feynman Lectures on Physics, he describes the Wu et al. (1957) experiment: “build yourself a magnet, and put the coils in, and put the current on, and then take some cobalt and lower the temperature. Arrange the experiment so the electrons go from the foot to the head, then the direction in which the current goes through the coils is the direction that goes in on what we call the right and comes out on the left (Feynman et al., 1963, section 52-7 Parity is not conserved!).” Nevertheless, Feynman also defines the directions of “top” and “bottom” in this experiment.

Ideally, the definitions of “right” and “left” should not depend on history and convention (Feynman et al., 1963, section 52-4 Mirror reflections). As an example, most screws have right-handed threads, which are arbitrarily determined. In fact, it is possible to make left-handed screws, which are traditionally used for coffins (McManus, 2002). Similarly, the right-hand rule for magnetic fields and the definition of neutrinos as left-handed are merely conventions. Physicists could have defined the electric field as a pseudo-vector and the magnetic field as a vector (Griffiths, 2004); they could likewise have renamed neutrinos as anti-neutrinos and vice versa, thus changing their handedness. An interesting question, however, is whether right-handed neutrinos can be detected, such that they exist not only in the mirror world but also in the real world.

In summary, we may describe the plane mirror image as “front-back reversed,” “left-right reversed,” or “top-bottom reversed.” The descriptions of the plane mirror image depend on the “handedness of the object” and the “axis of the mirror.” More importantly, we should understand Feynman’s reasoning pertaining to the concepts of “front-back reversed,” the “handedness of an object,” and the “definitions of left and right.”

Note
In an article titled Theory of the Fermi interaction, Feynman and Gell-Mann (1958) state that “only neutrinos with left-hand spin can exist (p. 195).”

References:
1. Ansbacher, T. H. (1992). Left-Right Semantics. The Physics Teacher, 30(2), 70.
2. Cutnell, J. D., & Johnson, K. W. (2004). Physics (6th ed.). New Jersey: John Wiley & Sons.
3. Feynman, R. P. (1965). The character of physical law. Cambridge: MIT Press.
4. Feynman, R. P. (1994). No Ordinary Genius - The Illustrated Richard Feynman. New York: W. W. Norton and Company.
5. Feynman, R. P., & Gell-Mann, M. (1958). Theory of the Fermi interaction. Physical Review, 109(1), 193-198.
6. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Vol I: Mainly mechanics, radiation, and heat. Reading, MA: Addison-Wesley.
7. Galili, I., & Goldberg, F. (1993). Left-right Conversions in a Plane Mirror. The Physics Teacher, 31(8), 463-466.
8. Giancoli, D. C. (2005). Physics: Principles with Applications (6th ed.). Upper Saddle River, N. J.: Prentice Hall.
9. Griffiths, D. (2004). Introduction to Elementary Particles. Weinheim: Wiley-VCH.
10. Kelvin, W. T. (1894). The molecular tactics of a crystal. Oxford: Clarendon Press.
11. Knight, R. D. (2004). Physics for Scientists and Engineers with Modern Physics: A Strategic Approach. Boston: Addison Wesley.
12. Lee, T. D., & Yang, C. N. (1956). Question of parity conservation in weak interactions. Physical Review, 104(1), 254-258.
13. McManus, C. (2002). Right Hand, Left Hand: The Origins of Asymmetry in Brains, Bodies, Atoms and Cultures. Cambridge: Harvard University Press.
14. Muncaster, R. (1993). A Level Physics (4th ed). Cheltenham: Nelson Thornes.
15. Tabata, T., & Okuda, S. (2000). Mirror reversal simply explained without recourse to psychological processes. Psychonomic Bulletin & Review, 7(1), 170–173.
16. ’t Hooft, G. (1997). In Search of the Ultimate Building Blocks. Cambridge: Cambridge University Press.
17. Tipler, P. A., & Mosca, G. P. (2004). Physics for Scientists and Engineers (5th ed.). New York: W. H. Freeman.
18. Tomonaga, S. (1965). Kagaminonaka no sekai [The world in the mirror]. Tokyo: Misuzu-Shobo.
19. Wu, C. S., Ambler, E., Hayward, R. W., Hoppes, D. D., & Hudson, R. P. (1957). Experimental test of parity conservation in beta decay. Physical Review, 105(4), 1413-1415.