Nuclear Fusion, Episodes I and II

The series on Fusion was very long and released throughout 2019. All of the episodes can be accessed here:

Nuclear Fusion: Darwin and the Sun

In 1859, Charles Darwin published “On the Origin of Species”, the book that first set out what has developed into the modern theory of evolution. As discoveries go — well, not for nothing have people called this the greatest idea that a person ever had. He was famously inspired by a trip to the Galapagos islands, where he observed a dozen different species of finches, varying from island to island. Perhaps less famously, he was influenced by Malthus, as well. Loyal listeners will recall that we discussed the question of “overpopulation” and the “Malthusian catastrophe”, where human demands on the Earth outstrip the availability of resources, as part of our series of episodes on the apocalypse. The bleak, Malthusian view of the world — published over a decade before Darwin was born — was influential for Darwin when he was trying to figure out just why it was that so many different species seemed to exist, to appear and disappear, and to change. At this time, he already had an idea that “there is a force like a hundred thousand wedges, trying to force every kind of structure into gaps of nature… or thrusting out the weaker ones.”

It was then that he happened to read Malthus, and noted:

“In October 1838, that is, fifteen months after I had begun my systematic enquiry, I happened to read for amusement Malthus on Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The result of this would be the formation of new species. Here, then, I had at last got a theory by which to work…”

The genius of this idea is probably that it’s so simple — if there’s a population that varies a lot — like animals and plants — and a selection process — the ability to survive and reproduce — then you’d expect those with favourable characteristics to be selected. Then if they pass on their characteristics to new generations, the population will gradually change to have these favourable characteristics. This doesn’t just apply to biology; even now, this process is being used in the field of evolutionary AI to generate better algorithms by tweaking parameters and selecting the ones that work. It’s so simple as to almost seem obvious, yet so profound — it can explain a vast array of phenomena in the world that previously confounded thinkers. The exquisite adaptation of living things no longer needed an intelligent designer to explain it — and, until Darwin, that adaptation had provided the main evidence that there was an intelligent creator.
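If you want to see just how simple the recipe is, here’s a toy sketch in Python — not any particular published algorithm, just variation, selection, and inheritance applied to strings of bits, where “fitness” is simply the number of ones:

```python
import random

random.seed(1)

LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(genome):
    # "Favourable characteristics": here, simply the number of 1s.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Variation: each bit has a small chance of flipping.
    return [b ^ 1 if random.random() < rate else b for b in genome]

# A population that varies a lot: random bit-strings.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
initial_best = max(fitness(g) for g in population)

for _ in range(GENERATIONS):
    # Selection: the fitter half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    # Inheritance (with variation): offspring are mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - POP // 2)]

final_best = max(fitness(g) for g in population)
```

No individual step knows the goal, yet the population reliably climbs towards it — that’s the whole trick, in biology and in evolutionary AI alike.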

But this profound idea had profound consequences. Darwin was reluctant to do more than hint at it in his first book, but the obvious implication was that humans had themselves evolved — and perhaps from animal ancestry. The timescales required for evolution meant that the Earth must be hundreds of millions of years old, at least. It was not impossible to reconcile the theory of evolution with the existence of a creator who set it all in motion — that was possibly the way that Darwin felt, although he described himself as agnostic. But it was certainly impossible to reconcile it with a literal reading of Genesis, where Earth is created in a week in 4004 BC and God painstakingly designs every single species that exists… and, I guess, chucks some dinosaur skeletons in the ground to test our faith. More than the religious dogma that evolution contradicted, though, it was another great blow to our traditional view of ourselves. Copernicus and others had pointed out that the Earth wasn’t at the centre of the Universe; now, Darwin was saying that the human species wasn’t particularly special.

We’d gone in a few centuries from being created in the image of the divine God, the Universe revolving around us… to smart apes occupying an evolutionary niche on a not-particularly-special planet in an obscure corner of a vast Universe. It’s bound to be a blow to the ego when you look at it like that.

These consequences upset and disturbed Darwin too; he held off from publishing the work for years, and in his letters at the time of publication, he begged friends to see if there wasn’t merit in his theory and warned them that they might not like it. And, of course, because it’s science, people were immediately trying to take him down and pick apart his idea. And this is where the title of this episode comes in, because one of the scientists who tried to contradict Darwin presented a little paradox that seemed to make evolution impossible. It was the paradox that has occurred to every curious child: why does the Sun shine?

It wasn’t a child trying to take Darwin down, though, but Lord Kelvin. He was most famous for his contributions to thermodynamics, which is why they named the scale of absolute temperature after him, and I’m sure loyal listeners will recall we talked about him in the Second Law episode. So it’s natural that he took a thermodynamic approach to the problem of why the Sun shone, for which you need to answer — where does the energy come from?

First Kelvin dismissed the possibility that the Sun might be burning some kind of fuel — no known fuel would provide you with that amount of energy for that long before burning out. If the Sun was made of something burning, its fuel must contain thousands of times more energy than anything known on Earth. Then he considered that perhaps the Sun’s fuel was continually replenished by meteorites, but this quickly became untenable, too; there just weren’t enough meteorites crashing into the Sun to refuel it.

So he was left with the conclusion that when the Sun had formed, it had a lot of energy; it was now radiating that energy away into space and cooling down. But where did that original energy come from? Kelvin thought that the energy that the Sun radiated came from a gravitational collapse. In many ways, you can see why he thought this — most of the astrophysical things we see are sculpted by gravity, and it seems to be the dominant force on those length-scales. So the idea was that all of the energy that was constantly radiating from the Sun came from gravity — from when the Sun originally collapsed. As the stuff that formed the Sun collapsed into a star, gravitational potential energy was being converted into kinetic energy; and this was then radiated away from the Sun as heat. Since you can calculate how much gravitational potential energy is released when a sphere forms — it’s actually a really standard physics calculation — Kelvin could tell you how much energy the Sun should have from collapsing.

Since the Sun was clearly radiating away its energy at an astonishing rate, Kelvin reasoned that it must be cooling down. Based on distance measurements to the Sun and its brightness, Kelvin could obtain estimates for how hot the Sun had been when it first formed, and, using its current temperature, how long it had been cooling down for. He did so, and came up with an estimate for the Sun’s lifetime that was between ten and a hundred million years, with a best guess of 32 million years.
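You can redo Kelvin’s envelope calculation in a few lines. The gravitational energy released in assembling a uniform sphere is 3GM²/5R — that standard physics calculation — and dividing by the rate the Sun radiates gives a lifetime. The solar values below are modern ones, which Kelvin didn’t have, so the answer lands near twenty million years rather than exactly his 32:

```python
# Kelvin-Helmholtz estimate: gravitational energy released forming a uniform
# sphere, divided by the rate the Sun radiates it away (modern values assumed).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
L_SUN = 3.828e26     # solar luminosity, W

# Energy released when a uniform sphere collapses from far away:
energy = 3 * G * M_SUN**2 / (5 * R_SUN)   # joules

SECONDS_PER_YEAR = 3.156e7
lifetime_years = energy / L_SUN / SECONDS_PER_YEAR
# Roughly 2e7 years: tens of millions, just as Kelvin concluded.
```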

Let’s just take a second to appreciate that Kelvin’s theory is also rather doomy. The Sun is constantly cooling down, burning through its reserves of energy. In this model, all the matter in the Universe eventually clumps together into stars, which then radiate away that energy into the Universe, until everything is lukewarm soup and lukewarm stars.

But this theory spelled doom for Darwin. Kelvin reasoned that the Sun was probably the same age as the Earth — true — and even if they somehow didn’t form together, life is hardly going to flourish with no Sun. This meant that there had only been a few tens of millions of years during which evolution could possibly have taken place; simply not enough time for Darwin’s theory to be right.

You get a sense of the age-old animosity between physicists and biologists, here. There’s nothing worse than a physicist who’s just done a back-of-the-envelope calculation to try to disprove your theory. One can almost hear Kelvin saying: “Yes, Charles, that’s a very cute theory that you figured out by looking at the finches, and it’s a lovely story for your book, but it contradicts the Laws of Thermodynamics! You can’t break the Laws of Physics, so pipe down.” And the fact that geologists had also estimated the Earth to be hundreds of millions of years old, based on how quickly rocks were eroding, didn’t seem to faze Kelvin either. Darwin had considered this to be such an important aspect of his theory that he had done his own geological studies, demonstrating that the amount of time required to form a particular valley in England must have been at least 300 million years. But Kelvin thought that perhaps catastrophic floods could have caused much faster erosion. Besides, what would the people who actually studied the Earth know about the Earth?

Kelvin was even more smug when he was able to turn the thermodynamic arguments towards the Earth. It was known at the time that the Earth is filled with molten rock; well, by the same logic, presumably it was formed as a ball of molten rock that had gradually begun to cool down. By considering how long that cooling would take, he was able to get another estimate of the age of the Earth that was tens of millions, rather than hundreds of millions of years.
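Kelvin’s Earth estimate can be sketched the same way. He treated the Earth as a semi-infinite solid of initially molten rock cooling by conduction; in that model the surface temperature gradient falls off as 1/√(πκt), so today’s measured gradient tells you how long the cooling has been going on. The parameter values here are illustrative textbook ones, not Kelvin’s exact figures, and the answer is very sensitive to them:

```python
import math

# Kelvin's conduction model: a half-space of rock, initially molten at T0
# above the surface temperature. The surface gradient at time t is
# T0 / sqrt(pi * kappa * t); invert it to get the cooling time.
T0 = 3900.0          # initial molten-rock temperature excess, K (assumed)
GRADIENT = 0.037     # measured near-surface gradient, K per metre (assumed)
KAPPA = 1.2e-6       # thermal diffusivity of rock, m^2/s (assumed)

t_seconds = (T0 / GRADIENT) ** 2 / (math.pi * KAPPA)
t_years = t_seconds / 3.156e7
# Order 1e7 to 1e8 years, depending strongly on the assumed parameters.
```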

We now know that Kelvin’s calculations didn’t include everything. Those truly devoted listeners who remember the very first episodes of this show, about stellar formation, will remember this idea about gravitational potential energy from the collapse heating up a star. This is what happens in protostars — those newly-born stars that are really just masses of gas falling and spiralling inwards, getting hot and heavy! And we also know that many stars are just endlessly cooling, radiating energy away into space and getting dimmer: white dwarf stars, where fusion has already stopped. But we mark the moment when the star is truly born when it gets another source of energy, beyond just gravitational potential from infalling material. A star is born when that energy gets hot enough, and the gravitational pressures big enough, to ignite nuclear fusion in the heart of the star. This was the mysterious fuel that Kelvin couldn’t quite put his finger on. The energy came from the nuclear forces — the extraordinary amounts of energy that can be liberated when nuclei rearrange themselves.

We now know that the Earth is 4.54 billion years old — plenty of time for evolution to develop the vast array of species we see around us, and survive a few mass extinctions and a boring billion years along the way. Darwin and Kelvin are both dead, but they’re both famous in the annals of scientific history, so I suppose there’s no love lost there.

Yet now there was a new and tantalising prospect. Just imagine the field of radiation and nuclear physics developing today. Within [a few decades], you go from having no knowledge of radioactivity — no one having any conception that the atom had a nucleus, and the theory of atoms still controversial — to realising that the mysterious force that binds the nucleus holds almost limitless energy, and lights up the Universe. Imagine the dreams we’d have if such a discovery was made today! Shifting nuclei towards that region of stability, where the binding energy increases when you fuse light nuclei together. If there was enough energy to light up the stars and the night sky, across countless light years, sending twinkling signals into the void… could we perhaps harness this energy on Earth?

By the 1920s, it had become clear that Kelvin’s argument was wrong — the Sun must be drawing on some energy source, rather than gradually smouldering to nothing. Arthur Eddington — the man behind the experimental expedition that confirmed Einstein’s theory of general relativity — is generally credited with being the first to declare that nuclear energy was powering the Sun. It would fall to Bethe and other scientists to work out precisely how it worked, the exact nuclear fusion reactions that were occurring, and hence to estimate things like the stellar lifetime and describe the various phases of nucleosynthesis — the creation of all the elements around us, as we’re built out of star-stuff. For more on this, of course, head way back to our first ever episodes, Hot and Heavy.

I’m going to quote Eddington, however. The whole speech is great — you can find it all online; I found it at Andrew Hamilton’s homepage at the University of Colorado. The talk is called “The Internal Constitution of Stars”, and it’s a pretty amazing snapshot of the state of stellar physics in 1920.

First off, Eddington starts by summarising the state of stellar physics and observational science. Then he talks about Kelvin’s predictions and why they don’t make any sense:

“This study of the radiation and internal conditions of a star brings forward very pressingly a problem often debated in this Section. What is the source of the heat which the Sun and stars are continually squandering? The answer given is almost unanimous — that it is obtained from the gravitational energy converted as the star steadily contracts. But almost as unanimously this answer is ignored in its practical consequences. Lord Kelvin showed that this hypothesis, due to Helmholtz, necessarily dates the birth of the Sun about 20,000,000 years ago; and he made strenuous efforts to induce geologists and biologists to accommodate their demands to this time-scale. I do not think they proved altogether tractable. But it is among his own colleagues, physicists and astronomers, that the most outrageous violations of this limit have prevailed. I need only refer to Sir George Darwin’s theory of the earth-moon system, to the present Lord Rayleigh’s determination of the age of terrestrial rocks from occluded helium, and to all modern discussions of the statistical equilibrium of the stellar system. No one seems to have any hesitation, if it suits him, in carrying back the history of the earth long before the supposed date of formation of the solar system; and in some cases at least this appears to be justified by experimental evidence which it is difficult to dispute. Lord Kelvin’s date of the creation of the Sun is treated with no more respect than Archbishop Ussher’s.”

Eddington finally takes the side of the other scientists, the biologists and geologists, by pointing out the hypocrisy of the physicists! They were happy to claim that stars were continually contracting, but also to contradict themselves by not really dealing with the age limit that this would imply. But you have to remember — this is 1920; Einstein has already published special relativity, and people accept that light has a finite speed. So this means that they know that, when they’re looking at faraway stars, they’re actually looking back in time. The reason is simple: if the object is 20,000 light years away, the light takes 20,000 years to get to you, so you’re seeing light that was emitted from the object 20,000 years ago. Whenever you look anywhere, you’re looking at the past: you can never see the world as it is now; the further you look, the further back into the past you see. In some ways, the present moment for you is defined by all the events in your “past light cone” — the region of spacetime that can influence you. But that’s for the special relativity episodes.

Usually this effect is small, but with astronomical distances, it becomes important. Eddington pointed out that we can look at arrangements of stars called globular clusters at various different distances — and hence various different times. If stars are fuelled by constantly contracting, you’d expect the further-away clusters to have stars that were bigger on average — they’ve had less time to contract! Yet there didn’t seem to be much difference between these clusters. There are also stars called Cepheid variables, which are famous because they regularly pulsate — they get brighter and then darker again. Eddington pointed out that the period of this pulsation, the amount of time between pulsations, should change if the star was constantly contracting as it burned up all of its gravitational energy. But one particular star had been observed since 1785, and the period had barely decreased at all — by hundreds of times less than it should have done. Another nail in the coffin of the contraction theory.

So, finally, he comes on to discuss what the actual source of the stellar energy is. He knows about E = mc², the equivalence of mass and energy. It’s also known at this point that helium is slightly lighter than “4 hydrogen atoms” (which is what they thought helium was made of at that point). This is how you can obtain energy from nuclear fusion — when the constituent parts combine, the result is lighter than the sum of the parts, and the difference in mass is the energy released. [We’ll talk about why in the next episode.] Eddington can then do a back-of-the-envelope calculation, working out how much energy might be released if the Sun is made of fusing hydrogen — and he gets a figure for the Sun’s lifetime that’s far longer, and far closer to what everyone suspects!
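With modern atomic masses, Eddington’s envelope takes a few lines to check: four hydrogen atoms outweigh a helium atom by about 0.7 per cent, and that missing mass, times c², is the energy released. The 10% “burnable fraction” below is a rough modern assumption about how much of the Sun’s hydrogen ever fuses in the core, not a figure from Eddington:

```python
# Mass defect of helium, using modern atomic masses (kg).
C = 2.998e8              # speed of light, m/s
M_H = 1.6735e-27         # hydrogen atom
M_HE = 6.6447e-27        # helium-4 atom

defect = 4 * M_H - M_HE              # mass lost when 4 H -> He
fraction = defect / (4 * M_H)        # ~0.007, close to Eddington's "1 part in 120"
energy_per_fusion = defect * C**2    # joules released per helium nucleus made

# If a modest fraction of the Sun's mass fuses, how long can it shine?
M_SUN, L_SUN = 1.989e30, 3.828e26    # solar mass (kg) and luminosity (W)
BURNABLE = 0.1                       # assumed: ~10% of the mass fuses in the core
lifetime_years = BURNABLE * M_SUN * fraction * C**2 / L_SUN / 3.156e7
# Around ten billion years -- ample time for Darwin.
```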

But I think it’s amazing that in this — perhaps the first public statement talking about fusion as the energy source that powers stars — we’re already discussing it as an ideal source of energy for the human species. Eddington realised straight away the immense potential of what this could mean.

“A star is drawing on some vast reservoir of energy by means unknown to us. This reservoir can scarcely be other than the sub-atomic energy which, it is known, exists abundantly in all matter; we sometimes dream that man will one day learn how to release it and use it for his service. The store is well-nigh inexhaustible, if only it could be tapped. There is sufficient in the Sun to maintain its output of heat for 15 billion years.

The nucleus of the helium atom consists of 4 hydrogen atoms bound with 2 electrons. But Aston has further shown conclusively that the mass of the helium atom is less than the sum of the masses of the 4 hydrogen atoms which enter into it. There is a loss of mass in the synthesis amounting to about 1 part in 120, the atomic weight of hydrogen being 1.008 and that of helium just 4. … We can therefore at once calculate the quantity of energy liberated when helium is made out of hydrogen. If 5 per cent of a star’s mass consists initially of hydrogen atoms, which are gradually being combined to form more complex elements, the total heat liberated will more than suffice for our demands, and we need look no further for the source of a star’s energy.

If, indeed, the sub-atomic energy in the stars is being freely used to maintain their great furnaces, it seems to bring a little nearer to fulfillment our dream of controlling this latent power for the well-being of the human race — or for its suicide.”

Amazingly prescient. The first time people talk about fusion as the power source for the Sun, he’s already seeing this dichotomy with nuclear energy — is it going to be the energy source that helps the human species ascend to new heights… or to destroy itself?

So present in this speech already is a dream that one day, we might be able to harness the miraculous, limitless-seeming source of energy from the fusion of light nuclei. It’s this idea that I really want to explore in the next series of episodes. Could it really be true? It’s going to take us through some incredible historical moments, some triumphs, some tragedies, and some scandals; from a century in the past to decades in the future.

Because there’s another famous quote about fusion.

“The idea is simple. You put the sun in a bottle. The only problem is building the bottle.”

Nuclear Physics — Rutherford’s Atom

I want to tell a story about a new technological era. Initially, the discoveries that were made were only of interest to a few specialists. Many people might have considered the research to be something bizarre, inexplicable, or too theoretical to impact their everyday lives. A few visionaries — or crackpots — suggested that, one day, this fundamental research might change the world — transform it from its present state into a paradise, with limitless energy, where humans could achieve incredible things. They were mostly ignored. But soon this changed. As the discoveries mounted, and people began to realise that this new technological force could be harnessed — not just in science fiction, but for profit; not just to change the world as it was, but to create terrifying weapons of war and wield incredible power — people began to talk of entering a new era for the human species. An era so radically different from the present day. An era that would either lead the Earth on the transformational path to a techno-utopia — or the shorter one, to a flaming, poisoned wreckage.

I want to tell you this story. But first, I want to highlight some of the characters who you might not hear about. In the study of history, there has been an ongoing debate since the 19th century between two alternative interpretations. One of them is the “great man theory”, and we’ll keep the misogyny because it’s an old-fashioned idea. This is the concept that the course of history is shaped by charismatic individuals — leaders who end up with large followings, who bend events to their will due to their charisma or talents. You can name some such individuals, of course. Julius Caesar. Jesus. Alexander the Great. Genghis Khan. Muhammad. Charlemagne. Napoleon. This is pretty Euro-centric, but most cultures and histories have such revered, titanic figures. In the modern era, their reputations can be a little more patchy; perhaps we remember better the brutalities of Stalin, Hitler, and Mao. It makes a nice story; and it gives you a more convenient narrative framework for covering history. Following all of the intricate threads of history — the interactions and interplay between various groups, trends, and forces; the relative importance of religion, economics, and so on — is tricky to do. Maybe we as people can relate more to the story of an individual, seeing the world through their lens. And so, often, the history of Rome becomes the history of the lives of the Roman Emperors; the history of the United States becomes the story of each Presidential administration.

But there is of course a dissenting theory — that trends and forces are the most important. Like the anthropic principle, there are weak and strong versions of this theory. The strong version suggests that the historical circumstances of the time make particular personalities inevitable. The German economy was in ruins during the Great Depression; the country was bitter after the Treaty of Versailles had imposed a humiliating peace; some populist, aggressive, nationalist leader was bound to take control. If you had a time machine and killed Hitler, someone else would inexorably fill his place, and the war would have unfolded much as before. Individual humans are almost just pieces on a chessboard, or rational actors in some vast game-theoretic set of equations. They might seem to be kings, when we pawns look at them. But they’re moved by forces larger than themselves. That’s the strong version. Perhaps in the weaker version, we argue that great personalities do exist, but they can only seize control of events when the trends and forces align correctly. After all — it seems almost ridiculously deterministic to say that Napoleon was destined from birth to change the face of Europe. If there had been no French Revolution in 1789–93, how would he ever have got his chance? And, similarly, this avoids the worrying thought that every great person goes on to shape the course of history. Some of them probably just end up being happy, instead.

As in history, so in the history of physics. It’s easy to point to individuals who changed the course of how we think. Aristotle. Galileo. Newton. Maxwell. Einstein. The list goes on and on. It’s true that some individuals make truly outstanding contributions to our knowledge of the Universe; that, in this specific realm, some people seem to have immense scientific gifts. There are scientists who are revered in hushed tones as being born with some unnatural genius. A scientific mind that arises once in a generation — once in a century, even. If anything, the so-called “Great Man theory” pervades our understanding of how science and technology develops more than it ever pervaded history. I’m guilty of this too, of course, with episodes on Newton already out, and more on Einstein sure to come. In the end, narrating history this way — through the story of these remarkable humans — is just too tempting.

Yet, at the same time, as any of these physicists would be the first to admit — it is very rare that you are not shaped by the science that has come before you. You’re building on a vast edifice of other people’s discoveries; your insights, your breakthroughs, are often impossible without the legwork that was done by previous geniuses. Not just previous geniuses, though; previous, unregarded, unnoticed, perhaps even completely forgotten figures from the history of science. Even if you develop a theory that’s almost completely unique to you — as some of those who discovered quantum mechanics, or Einstein with relativity, could perhaps claim to have done — you’re building on the theoretical problems that you understand in the framework of a previous era. Physics is broken; there are these discrepancies; there is this piece of mathematics lying around — and all of the groundwork is laid for someone to come along, get a little lucky in the path they choose to take, make incredible advances, and take all the credit. The work done by the forgotten scientists — not just the ones who made it possible, but everyone who fruitlessly spent years confirming that a dead end truly is a dead end — is often unappreciated.

In the spirit of remembering the forgotten martyrs of physics, then, I’ll tell this story a little differently. In the early 1900s, there were two physicists called Geiger and Marsden. They worked under Ernest Rutherford, a hot-shot professor who had just won the Nobel Prize for Chemistry (which always annoyed him, because he thought of himself as a physicist). Rutherford had won the Nobel Prize not only for discovering that radioactivity took multiple forms, which he called alpha and beta particles, but also for showing that the radioactive decay of elements involves one element transforming into another. The dream of the alchemists — that you could transmute elements into each other — had finally been realised — but it only applied to radioactive elements, and sometimes you had to wait thousands of years for the transformation to take place. Physics has a way of biting you on the bum.

We now understand that an alpha particle is pretty much the same thing as a helium nucleus: two protons, and two neutrons. That means it’s charged, which is important — both for the experiment I’m about to describe, and because its high charge also explains why it can ionise other atoms when it interacts with them, tearing their electrons away. This is why alpha-emitters can cause immense damage to the body — and why the Russians used polonium-210, an alpha emitter, to kill Alexander Litvinenko. Luckily for most of us humans, alpha radiation can’t penetrate all that far through matter, so you should be safe as long as you don’t touch it, there’s a little shielding involved, and spies don’t feed it to you. We now know that when a radioactive alpha-emitter’s nucleus is unstable, at some point the alpha particle escapes via quantum tunnelling — that minuscule probability that it happens, by quantum chance, to find itself just outside the energy barrier that’s holding the nucleus together — and it’s emitted from the nucleus. You’re left with a new nucleus — a different configuration of protons and neutrons — and, hence, a new element.
But they didn’t know this at the time; no-one even knew that the nucleus existed, although a similar model was suggested by Japanese physicist Hantaro Nagaoka. Instead, the scientific consensus held that the atom was like a “plum-pudding” — electrons, the recently-discovered negatively charged particle, were embedded in a sphere of positive charge. This allowed the atom to be overall charge-neutral, so it wouldn’t interact as strongly as charged particles do with electric fields, while also containing electrons.

At any rate, Rutherford decided to use his recently discovered alpha particles to probe the structure of matter, and it’s here that he enlisted the help of poor Geiger and Marsden. They regularly used to turn off all the lights in the lab, until it was pitch black, and then sit there, eyes open, for half an hour.

Their eyes needed to adjust to the dark. The task that these physicists had was to watch a thin screen made of zinc sulphide. They were staring intently at the screen for tiny flashes of light. Whenever they saw one — and at times there were 90 a minute, or even more — they had to record the location of the flash, and the number of flashes that they’d seen. It was such a strain on the eyes and concentration that you could only manage this in short bursts of a minute at a time before being overwhelmed by the flashes, these little pinpricks of light in the pitch darkness of the laboratory. Over the course of years, they recorded data from hundreds of thousands of tiny flashes. It’s no surprise that one of them would later spend many years developing an automatic detector — the Geiger counter — for just what he was trying to observe manually, by hand; so that no one ever had to go through that pain again.

So whenever you see a cool result in physics, or appreciate the benefits of modern technology, just think of all of the poor graduate students who had to suffer to bring you that information. Whenever you turn on a light-switch in France — think of poor Geiger and Marsden, counting their tiny flashes. Each flash was an alpha particle interacting with the zinc sulphide screen; this was the only way they could count them. This was the famous Rutherford gold foil experiment (or, in the spirit of giving them credit, the Geiger-Marsden experiment.) It was one of the pivotal moments in our understanding of the atom — and, perhaps, the birth of nuclear physics.

A common misconception about the Geiger-Marsden experiment is that the real revelation was that most of the alpha particles passed straight through the gold foil. This is how it’s sometimes presented: “Amazingly, most of the alpha particles passed straight through, showing that the atom was mostly empty space!” In fact, the atomic models of the day already predicted that the alpha particles should fly straight through the foil. Remember, they still had their plum-pudding model; and this is your ideal physics experiment: you have some idea of what you’re expecting to find, preferably with some specific calculation or number to check. If you find out you’re correct, the existing theory survives another test. If you find your calculation disagrees with experiment, then you get incredibly excited, check the experiment a thousand times; 99.9999999% of the time you find you’ve made a stupid mistake in setting it up that’s giving you a ridiculous value, but 0.0000001% of the time you’ve discovered new physics! Fame, fortune, and Nobel prizes all around!

Anyway, such was the setup here. The charges were known, and so the electric fields could be calculated; they had Maxwell’s equations that described the theory of electromagnetism, and so they could calculate exactly how much the alpha particle, with its positive charge, would be deflected — and they thought that even a close interaction with an atom should only deflect the alpha particle by a tiny fraction of a degree. Even if the alpha particle found its way entirely through the gold foil that Geiger and Marsden were using, interacting with every atom along the way, it should only be deflected by a few thousandths of a degree — and most alphas should pass straight through with no measurable deflection.

Instead, they saw that some alpha particles — a tiny fraction — were deflected through angles of more than 90 degrees; some bounced straight back off the thin gold foil. They realised that the electric field strength required to turn an alpha particle around like that was huge. After all, these are hefty beasts, with a measurable mass, and they were being fired at the foil at quite some speed. And this totally wrecked the plum-pudding model of the atom: you needed a very strong electric field, concentrated in a tiny region, to make it work.

The electric field at the surface of a sphere of charge depends on the radius of that sphere and the amount of charge. If the same amount of charge is smeared across a larger sphere, the field at its surface is weaker — the charge is more spread out, and the field strength falls off with the square of the distance from the centre of the charge. The only way they could get the kind of results they were seeing was if the atom contained a tiny, positively charged nucleus. That way, the vast majority of alphas would pass straight through the foil, while a tiny fraction would pass close enough to the strong electric field surrounding the nucleus to be deflected by these large angles — what you might call a head-on collision. To be electrically neutral overall, the electrons would have to somehow orbit around this nucleus. This was the new model of the atom, and probably the one we all picture when we try to picture an atom (although often, in our mind’s eye, the size of the nucleus is exaggerated — its radius is something like 1/10,000th the radius of the whole atom, as a rule of thumb.)
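To put rough numbers on this, here’s a minimal sketch, using assumed round-number values for the charge and radii of a gold nucleus and a gold atom. It compares the peak field in the two pictures: the same 79 proton charges concentrated in a nucleus, versus smeared across a whole atom-sized sphere, using the field of a sphere of charge, E = kQ/r², evaluated at the surface.

```python
# Assumed round-number values, for illustration only.
K = 8.99e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
Z_GOLD = 79           # protons in a gold nucleus

Q = Z_GOLD * E_CHARGE
r_nucleus = 7e-15     # ~7 femtometres, rough gold nuclear radius
r_atom = 1.4e-10      # ~0.14 nanometres, rough gold atomic radius

# Field at the surface of each sphere: E = kQ / r^2
E_nucleus = K * Q / r_nucleus**2
E_atom = K * Q / r_atom**2

print(f"Field at nuclear surface: {E_nucleus:.2e} V/m")
print(f"Field at atomic surface (plum pudding): {E_atom:.2e} V/m")
print(f"Ratio: {E_nucleus / E_atom:.1e}")
```

With these numbers the nuclear field comes out hundreds of millions of times stronger than anything the plum-pudding picture can supply, which is exactly the kind of field you need to throw an alpha particle backwards.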

This is a classic physics experiment due to the simplicity of the setup, the richness of the inference, and of course the fact that it disproved an old model while presenting a shiny new one to investigate. Rutherford later described his surprise at the experiment by saying:

“It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre, carrying a charge.”

As soon as Rutherford had the idea, he realised that the results they’d seen were easy to explain if the nucleus was tiny — and the deflection was caused entirely by an inverse-square force: the electric repulsion between the alpha particle and the nucleus. You can calculate the scattering angles you’d expect for an alpha particle passing close to a nucleus — in fact, it’s a calculation they made us do in undergraduate physics — and, lo and behold, the theory of a dense, positively charged nucleus at the centre of the atom, with orbiting electrons, gave the correct answer, and the plum-pudding model was cast into the dustbin of history. The system works!
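For the curious, that undergraduate calculation can be sketched in a few lines. The Rutherford formula for a pure inverse-square Coulomb force gives the deflection angle as tan(θ/2) = kZ₁Z₂e²/(2·E·b), where b is the impact parameter — how far off-centre the alpha would have passed the nucleus. The numbers below (a 5 MeV alpha on gold) are assumed, typical values, not the figures from the original experiment.

```python
import math

K = 8.99e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
MEV = 1.602e-13       # joules per MeV

def scattering_angle(b, kinetic_energy, z1=2, z2=79):
    """Rutherford scattering angle (radians) for impact parameter b (m),
    assuming a pure Coulomb interaction with a point nucleus:
    tan(theta/2) = k*z1*z2*e^2 / (2*E*b)."""
    return 2 * math.atan(K * z1 * z2 * E_CHARGE**2 / (2 * kinetic_energy * b))

E_alpha = 5.0 * MEV  # assumed typical alpha energy from radioactive decay
for b in (1e-12, 1e-13, 1e-14, 3e-15):
    theta = math.degrees(scattering_angle(b, E_alpha))
    print(f"impact parameter {b:.0e} m -> deflection {theta:6.2f} degrees")
```

The pattern matches what Geiger and Marsden saw: an alpha passing an atom’s-width away is barely nudged, and only the rare, nearly head-on pass (b of a few femtometres) is thrown back through more than 90 degrees.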

Rutherford would later make other great contributions to nuclear physics, the field he helped to found — because the model was far from finished. Later experiments conducted by Rutherford used alpha particles to convert nitrogen into oxygen — a process that emits a proton, or a hydrogen nucleus, same thing. In the process, Rutherford realised that — since all atoms had masses that were roughly multiples of the proton mass, and charges that were multiples of the proton charge — the proton must be a building block of the nucleus. This helped explain the masses and charges, and the process by which elements could be turned into one another — nuclear reactions, where the nucleus lost or gained protons under the alpha-particle bombardment — but it created more problems. If protons could be knocked out of the nucleus, and the nucleus was made up of many positively charged protons, what stopped their electrostatic repulsion from tearing the nucleus apart? This required the idea of the strong nuclear force, and with it, another entirely new branch of physics. Rutherford was one of those who proposed a new, neutral particle — the neutron — which sat in the nucleus and helped to bind the protons together. And, in 1935, a few years before Rutherford died, he got to see his colleague James Chadwick win a Nobel Prize for discovering the neutron and proving the idea correct. Even then, not all of the questions provoked by Rutherford’s discovery had been resolved: if the electrons are “orbiting” the positively charged nucleus, as you’d expect from their electrostatic attraction to it, there’s a problem. Accelerating charges radiate away their energy; it’s how electromagnetic radiation is produced, after all.
And the electrons in orbit around the nucleus would be constantly accelerating, as anything is when it changes direction in an orbit: the electrons should have radiated away all of their energy and spiralled into the nucleus in a tiny fraction of a second. A model of the atom that doesn’t allow any atoms to survive longer than that is obviously not ideal. This is a problem that would require quantum mechanics to solve; and that’s for another day.

Given the huge wealth of physics that was opened up by the gold foil experiment, it’s no wonder that it’s gone down in the annals of history as one of the most famous experiments ever conducted, revolutionising our understanding of the fundamental building-blocks of matter — and in an elegant, simple-to-explain way. Rutherford expressed shock at the results of the experiment, but he might have been even more surprised if he’d had any inkling what the discovery of this nucleus could possibly mean, how it would not only open up whole new branches of physics but come to change the course of history; that it would create both incredible hope and incredible fear for the human species. Next episode, we’ll skip forward in time to explain the “liquid drop model” for the energy of a nucleus — which will hopefully then allow us to explain why joining nuclei in fusion and splitting them in fission can both, depending on the nucleus, liberate energy. Join us then.

Thank you for listening…