Required math: none
Required physics: none
Given that quantum theory is still, after more than a century, very difficult to understand, an obvious question is why the theory was ever proposed in the first place. In this post, we’ll have a look at some of the things that went wrong with classical physics towards the end of the 19th century, and how the early ideas of quantum physics fixed these problems.
The first indication that quantum phenomena are important in the description of matter and energy was really that matter itself is composed of atoms. Although the idea that atoms were the fundamental unit of matter seems to have occurred first to the ancient Greek philosopher Democritus around 400 BC, the modern theory of the atomic structure of matter can be dated to the work of John Dalton, around 1800. The experimental evidence supporting Dalton’s ideas was based on the fact that in chemical reactions, the constituents always combined in strict ratios of masses. So, for example, if you mix hydrogen and oxygen, the ratio of the amounts of hydrogen and oxygen that combined to make a certain amount of water was always the same. You couldn’t just throw together any amounts of hydrogen and oxygen, set the reaction off with a spark, and expect all of the two gases to combine to make water. There was a fundamental difference here from the process of, say, dissolving sugar in water. In the latter case, you could put any amount of sugar (up to some maximum, where saturation occurred) in water, stir, and the sugar would all dissolve. With hydrogen and oxygen, two parts (by volume) of hydrogen would always combine with one part of oxygen to make two parts of water vapour.
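The fixed-ratio behaviour can be put into numbers with a little arithmetic. The sketch below assumes the approximate modern mass ratio of 1 : 8 for hydrogen to oxygen in water (the precise figure is closer to 1 : 7.94); the specific quantities are made up purely for illustration.

```python
# Illustration of Dalton's fixed mass ratios: hydrogen and oxygen combine
# in a mass ratio of about 1:8 to form water. Any excess of either gas is
# simply left over, no matter how much of it we supply.

H_TO_O_MASS_RATIO = 1 / 8  # grams of hydrogen per gram of oxygen (approx.)

def react(hydrogen_g, oxygen_g):
    """Return (water formed, leftover hydrogen, leftover oxygen) in grams."""
    # How much hydrogen would the available oxygen consume?
    h_needed = oxygen_g * H_TO_O_MASS_RATIO
    if hydrogen_g >= h_needed:          # oxygen is the limiting reagent
        water = oxygen_g + h_needed
        return water, hydrogen_g - h_needed, 0.0
    else:                               # hydrogen is the limiting reagent
        o_used = hydrogen_g / H_TO_O_MASS_RATIO
        return hydrogen_g + o_used, 0.0, oxygen_g - o_used

# Mixing equal masses does NOT consume both gases completely:
print(react(10.0, 10.0))   # -> (11.25, 8.75, 0.0): 8.75 g of hydrogen left over
```

However the two gases are mixed, the reaction only ever consumes them in the fixed ratio, which is just what Dalton’s atoms explain.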
Dalton proposed that if the constituents of a chemical reaction consisted of indivisible pieces called atoms, the simple numerical ratios observed in reactions could be explained. From then until near the end of the 19th century, however, the atomic theory served more as a working hypothesis than as established fact. As more evidence piled up, physicists became more and more certain that the atomic theory was correct. However, most of them did not believe that explaining the behaviour of atoms would require any revisions in classical physics. How wrong they were.
One of the key experiments that supported the need for some revision of physics was the Geiger-Marsden experiment in 1909. (This experiment is often known as the Rutherford experiment, and as such is an early example of a supervisor (Rutherford) taking credit for work done by his students and research assistants.) At the time, the prevailing model of the atom was the plum pudding model proposed in 1904 by J. J. Thomson, the discoverer of the electron. In this model, an atom consisted of a number of negatively-charged electrons (the plums) embedded in a distributed ‘pudding’ of positive charge.
The Geiger-Marsden experiment consisted of firing alpha particles (the nuclei of helium atoms, although that wasn’t realized at the time) produced by radioactive decay through a very thin gold foil. Although the actual composition of the alpha particles wasn’t known, it was known that they were positively charged. The idea of shooting positive charges through the foil was to measure how many alpha particles got deflected at various angles, and thus hopefully to learn something about the distribution of charge within the plum pudding. Since the positive charge within the atoms was believed to be smeared out, it was expected that the alpha particles would, for the most part, pass through the gold foil with only slight deflection, depending on density variations within the matrix of positive charge.
The actual result of the experiment was a complete surprise. While most of the alpha particles did indeed pass right through the foil with essentially no deflection, a few of them had a very large deflection, with some of them reflected almost 180 degrees. Rutherford correctly interpreted the results as indicating that atoms consisted mostly of empty space, with the positive charge concentrated in a very small space at the centre.
As a result, he proposed a model of the atom that is still seen today, even though it has long been known that it is not correct. Inspired by the solar system, Rutherford proposed that an atom consisted of a dense, positively-charged nucleus with the electrons orbiting around the nucleus. The gravitational force holding the planets in their orbits was replaced by the electrical attraction between the positive nucleus and the negative electrons.
There was, however, a major problem with this model. By 1911, when Rutherford published his model of the atom, it was known that any accelerating charge generates electromagnetic radiation, and thus loses energy. Remember that acceleration is any change in velocity, either in magnitude (speeding up or slowing down) or in direction (so you are accelerating even if your speed is constant while you steer around a corner). Thus proposing that the electrons are in some orbit around the nucleus means that they are accelerating, so according to classical physics, they must radiate away their energy and collapse into the nucleus.
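Just how quickly this classical collapse would happen can be estimated from the Larmor formula for the power radiated by an accelerating charge. A standard textbook estimate for an electron spiralling in from an orbit of radius $a$ is $t = a^3/(4 r_0^2 c)$, where $r_0$ is the classical electron radius. The sketch below evaluates this with rounded CODATA constants, taking the Bohr radius as the starting orbit.

```python
# Classical lifetime of the Rutherford atom: an orbiting electron radiates
# (Larmor formula) and spirals into the nucleus. The standard estimate for
# the infall time from radius a is
#     t = a^3 / (4 * r0^2 * c),
# where r0 = e^2 / (4*pi*eps0*m_e*c^2) is the classical electron radius.

c  = 2.998e8      # speed of light, m/s
a0 = 5.292e-11    # Bohr radius, m (starting orbit radius)
r0 = 2.818e-15    # classical electron radius, m

t_collapse = a0**3 / (4 * r0**2 * c)
print(f"classical collapse time ~ {t_collapse:.2e} s")   # about 1.6e-11 s
```

A classical atom would thus collapse in about ten picoseconds, so the mere existence of stable matter was a direct contradiction of classical physics.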
Since this clearly didn’t happen, Niels Bohr proposed in 1913 that electrons could occupy only certain fixed orbits, and that these orbits were stable in the absence of any external disturbance. Since each orbit of the electron has a definite energy, Bohr was essentially proposing that the energy levels of electrons within the atom were quantized. Although we know now that electrons don’t move in actual orbits, his original idea that energy was quantized has since proved to be true.
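A concrete example of quantized energy levels comes from the hydrogen atom: in the Bohr model the allowed energies are $E_n = -13.6\ \mathrm{eV}/n^2$ for $n = 1, 2, 3, \ldots$ The sketch below just evaluates the first few levels (the $-13.6$ eV figure is the standard hydrogen ground-state energy).

```python
# Quantized energy levels of hydrogen in the Bohr model:
#     E_n = -13.6 eV / n^2,   n = 1, 2, 3, ...
# Only these discrete energies are allowed; the electron cannot radiate
# energy continuously and spiral into the nucleus.

E1 = -13.6   # hydrogen ground-state energy, eV

for n in range(1, 5):
    print(f"n = {n}:  E = {E1 / n**2:7.3f} eV")
```

The levels bunch up towards zero energy (the ionization threshold) rather than forming a continuum, which is exactly the sense in which the energy is quantized.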
The photoelectric effect
By 1900, it was known that shining electromagnetic radiation (usually ultraviolet light or X-rays) onto certain metals caused them to emit electrons. The number and energy of the emitted electrons could be measured, but the results were counter-intuitive. Increasing the intensity of the radiation but leaving its frequency unchanged caused the number of electrons emitted to increase but not their energy. However, increasing the frequency of the radiation (moving more towards the X-ray end of the spectrum) did cause the energy of the electrons to increase. This seemed odd, since you might expect that increasing the intensity of the radiation would have a similar effect to that of hitting something harder with a hammer; in other words, you would expect more energy to be transmitted to the electrons so they would be ejected with higher energies.
The explanation of the photoelectric effect was given by Einstein in one of his set of 1905 papers (another of which proposed special relativity). Einstein said that if electromagnetic radiation was quantized (that is, the radiation came in discrete packets rather than as a continuous beam), and the energy of each packet was proportional to the frequency of the radiation, then the photoelectric effect would behave exactly as observed. Since the radiation is essentially light, all the quanta would travel at the speed of light (which Einstein had conveniently postulated was an absolute constant in his paper on relativity), and increasing the intensity of the light meant simply that more packets were being transmitted, but each packet still had the same energy. Thus you would expect more electrons to be emitted, but their energies would all be the same as before, which is just what was observed. Increasing the frequency of the radiation, however, would increase the energy of each packet, causing higher energy electrons to be emitted, again agreeing with observation.
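Einstein’s relation can be put into numbers. Each quantum (photon) carries energy $E = hf = hc/\lambda$, and an ejected electron escapes with at most $K_{max} = hf - \phi$, where $\phi$ is the metal’s work function. The sketch below uses sodium with $\phi \approx 2.28$ eV as an illustrative value; note that intensity enters only through the *number* of photons, never their individual energy.

```python
# Photoelectric effect: photon energy depends only on frequency,
#     E_photon = h * f = h * c / wavelength,
# and the maximum kinetic energy of an ejected electron is
#     K_max = E_photon - phi    (phi = work function of the metal).

H  = 6.626e-34    # Planck's constant, J*s
C  = 2.998e8      # speed of light, m/s
EV = 1.602e-19    # joules per electron-volt

def k_max_ev(wavelength_m, phi_ev):
    """Maximum photoelectron energy in eV (negative means no emission)."""
    photon_ev = H * C / wavelength_m / EV
    return photon_ev - phi_ev

PHI_SODIUM = 2.28   # illustrative work function for sodium, eV

# Violet light (400 nm): each photon carries ~3.10 eV, so electrons escape:
print(k_max_ev(400e-9, PHI_SODIUM))   # ~0.82 eV
# Red light (700 nm): each photon carries ~1.77 eV, below phi, so no
# electrons are emitted no matter how intense (how many photons) the beam:
print(k_max_ev(700e-9, PHI_SODIUM))   # negative: no emission
```

Doubling the intensity of the 400 nm beam doubles the number of ejected electrons but leaves each one’s maximum energy at 0.82 eV, which is exactly the behaviour that baffled classical physics.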
Other phenomena also showed the inadequacy of classical physics to explain the behaviour of matter, but the examples above should show that there was strong motivation for a drastically new type of theory. Things simply did not behave as they should on the atomic scale. It is hard to picture now just how radical the ideas proposed by the quantum pioneers were a century ago. Most of the ideas in quantum theory were wild speculations at the time they were made, and were only subsequently given mathematical underpinnings and experimental verification.