Nuclear Fusion: The Semi-Empirical Mass Formula

Physical Attraction
Oct 15, 2020

Episode III of the Nuclear Fusion series. The whole series can be accessed here.

There’s a famous saying attributed to the statistician George E. P. Box — and it’s one of the most important sayings for thinking about physics, and science in general.

“All models are wrong, but some models are useful.”

What does this mean?

Well, a model is a simpler representation of a more complex system. Maybe you’re simplifying it because you suspect in advance that it’s going to be safe to ignore certain physical processes. Perhaps you know that some effect is likely to be very small compared to what you’re interested in. For example, photons from the Sun actually exert a tiny pressure on everything they hit. If you’re designing a lightweight probe that’s going to head towards the Sun, you might need to worry about this. If you’re designing a bridge, you don’t need to be concerned about the difference in radiation pressure between day and night.

Other times, it’s because things are unknowable, or irrelevant. Take a classic example from thermodynamics — atoms and molecules colliding with each other and with surfaces. Let’s say you want to figure out the overall force that these atoms striking you exert. Now, technically speaking, to give a full accounting of all of the momentum transfer that’s going on, you’d need to know the precise location and speed of every atom at all times, right? But when you have 10²³ atoms, this becomes impossible to measure or know. Luckily for us, we only care about what happens *on average* — after all, when you’re underwater or under the atmosphere, you don’t feel pressure as individual atoms bouncing off you but instead the time-averaged force. You can replace a detailed accounting of individual atoms with statistical mechanics, thermodynamics, and statistical averages. It’s a model — the information is incomplete, and so in some ways, it’s “wrong” to pretend that the pressure pushing down on you is uniform and not actually made up of many tiny atomic bumps. But it’s good enough for the purpose.

Maybe you know the laws of physics, but actually performing the explicit calculation would take far too long without a noticeable difference in the result. A classic example of this is climate models, which divide the world and the atmosphere up into lots of little grid boxes. Then, for each box, temperature, pressure, precipitation, whatever is calculated based on the known laws of physics and influences from surrounding boxes. Processes that take place in areas smaller than a grid box, such as the formation and precise dynamics of clouds, aren’t in the model directly — but through parametrizations, where we represent the complex process of cloud formation by its average effect, we can — hopefully — ensure that we don’t go too far wrong.

And, still other times, it’s because you don’t actually understand the physical processes that underlie what you’re trying to understand. In this case, you might come up with an empirical rule — something based on observations. You don’t have any explanation for how it works. You don’t quite know why this rule seems to fit the data pretty well. But it certainly seems to fit. Maybe you’ve stumbled across an approximate version of a deeper understanding, or one that holds pretty well in the regions that you’re interested in.

Take, for example, gravity. For a long time, the arc of a thrown ball — a parabola, a curve that bends downwards — was simply treated as its natural trajectory. We now know that there are several forces acting — gravity, air resistance — that determine that shape; and, without these forces, the ball would carry on moving in a straight line at the same speed forever.

Or how about Kepler’s Laws of Planetary Motion? Actually, a lot of credit for those should go to the Danish astronomer Tycho Brahe, who spent years recording the motions of the planets in meticulous detail with his purpose-built instruments — this was before the telescope. [He also lost most of his nose in a duel that started from an argument at a party, and people have periodically dug up his grave to learn more about the prosthetic brass nose that he wore. I’m not saying that losing your nose means you have more free time to devote to astronomy: it’s just an interesting historical fact.] From Brahe’s observations, Kepler figured out all kinds of laws of planetary motion — for example, the fact that the square of the time period of a planet’s orbit around the Sun is proportional to the cube of the semi-major axis of its elliptical orbit [which is basically the long dimension of the ellipse.] But he didn’t know why these rules fit the data so well. It would take Newton’s laws of gravity to explain them. And a similar thing happened to Hubble, centuries later, when he noticed that galaxies seemed to be speeding away from us at speeds proportional to their distance. Again, at the time, there was no widely accepted theoretical explanation for this — but you could use it to make estimates and predictions, and so it became part of physics.

These models are all wrong in some way. There are special cases when they fail; or they actually seem to work and give you the right answer, even though the reasoning behind them is totally wrong. In a sense, this is true of physics itself. Our understanding is still incomplete, and the calculations we make could be based on a totally incorrect understanding. Newton’s version of gravity doesn’t posit, for example, that it’s caused by curvature in space and time, like general relativity does. But you can still use it to make good calculations. You can get rockets to the moon with Newton’s laws. We deal in successive approximations to reality — and sometimes, the best you can do is throw up your hands, admit you don’t understand why something really works, and then “shut up and calculate.”

This was the dilemma facing physicists when the nucleus was first discovered. As soon as it was realised, with the work of Rutherford, Meitner, Chadwick and others, that the nucleus of an atom was made up of even smaller particles — and some of them were charged protons — they knew that there must be some incredibly strong force holding the nuclei together. For a while, scientists thought that maybe the nucleus was held together by a kind of glue of electrons — negative charges that stopped the protons from flying apart — but this model couldn’t explain the size or stability of the nucleus either.

Let’s put this into numbers. The radius of the nucleus is measured in femtometres — that’s 10^-15m, a millionth of a billionth of a metre. Atomic scales are 10^-10m, so the nucleus length-scale is 100,000 times smaller than the atom. But volume is the cube of length, so the nucleus is a million billion times smaller in volume than the atom. These protons are packed in very tightly.

The charge on the proton is the same size as the charge on the electron. If you apply Coulomb’s law of electrostatic force, you see that two protons separated by 10^-15 m should repel each other with a force of around 230 N. That is a truly astonishing force. The weight of an object, in newtons, is roughly ten times its mass in kilograms, so that’s like having a 23 kg object pressing down on a single proton — which weighs around 10^-27 kg, or a thousand trillion trillionth of a kilogram. The force of gravity between the protons is absolutely pathetic by comparison and makes essentially no contribution to the energy balance in the nucleus. So you need a new, and astonishingly powerful, force.
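If you want to sanity-check that number, here’s a minimal sketch of the Coulomb’s-law arithmetic in Python, using standard textbook values for the constants:

```python
# Coulomb repulsion between two protons separated by one femtometre.
# Standard textbook constants; the result is approximate.
k = 8.99e9        # Coulomb constant, N m^2 / C^2
e = 1.602e-19     # proton charge, C
r = 1e-15         # separation, m (one femtometre)

force = k * e**2 / r**2
print(f"Repulsive force: {force:.0f} N")   # roughly 230 N
```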

But this just raised more questions for the physicists of the time. The only forces that had been described in detail were electromagnetism and gravity — and, for the most part, in the classical limit of stuff that humans deal with every day, they can be described by inverse square laws. The magnitude of the force between two particles is proportional to the product of their charges (or masses), divided by the square of the distance between them. But, classically at least, the force fields extend through all of space. You and I attract each other; you’re attracted to Jupiter and to your pencil case, but the relative size of the force means that you don’t notice most of these tiny little pulls. And these laws are remarkably mathematically similar; the only real differences are that gravity is far, far weaker, and is only attractive: there’s no such thing as a negative mass.

But the strong force is clearly different — it doesn’t fit this nice pattern of inverse-square law behaviour. If it did, you’d then have to answer the question of what stops all the nuclei from just merging with each other — since the strong force is so powerful. Instead, nuclei behaved very strangely — and the more nuclei that were discovered, the more strangely they seemed to behave. If the strong force dominated over all other forces, heavier nuclei should be more stable. But they weren’t: the heaviest elements were radioactive, and decayed into lighter elements. We now know that this is because the strong nuclear force doesn’t obey an inverse square law. Instead, it’s repulsive at very tiny distances — which keeps nuclei from becoming too compressed — and attractive at longer distances — but only out to about the range of a medium-sized nucleus. Beyond that, the attraction drops off rapidly, becoming negligible just a short distance away. It’s as if the force just exists to bind the nucleus together, without appearing in too many other contexts.

Understanding the precise nature of the strong force turns out to be incredibly difficult. There’s a whole theory — quantum chromodynamics — that explains how the quarks, which make up the protons and neutrons, interact with each other to produce this force between protons and neutrons. It took until the 1970s for theoretical understanding of the strong force to advance significantly. So the physicists of Rutherford’s era in the early 1900s were many, many breakthroughs away from any hope of a theoretical understanding of the strong force — and hence of what holds a nucleus together.

But that didn’t mean they couldn’t come up with a model that might help to explain why some nuclei were stable, and others weren’t. And this was no idle question, either. As we talked about in the last episode, Rutherford had proposed that the stars were powered by the fusion of light nuclei. Lise Meitner and her collaborators would soon show that, if you bombarded certain unstable nuclei with neutrons, you could make them split apart and release energy.

The question of whether a nucleus is stable or not is really all about energy. Broadly speaking, and this is very broad, when physical systems are free to change, they usually settle into the state with the lowest possible energy. We always imagine a ball rolling down into a valley, into a dip in the potential energy. The most stable place for the ball to be is right at the bottom of the valley; it has the lowest amount of gravitational potential energy, and you have to put in the most energy to shift it back up the hill.

It turns out that a similar logic applies to nuclei; the most stable nuclei are the ones with the highest “binding energy” per nucleon. The binding energy essentially measures how much energy you’d have to put into the nucleus to tear it apart completely — or, equivalently, the amount of energy that would be released if the nucleus somehow formed from a bunch of protons and neutrons joining together. The higher the binding energy per nucleon, the more stable the nucleus.

This is why both fusion, joining nuclei, and fission, splitting them, can release energy — it all depends on the change in binding energy due to the process. This can get a little confusing, because *more* binding energy means *less* energy is stored in the nucleus — you need *more* additional energy to tear it apart. When light nuclei fuse together (hydrogen into helium, say), the result has more binding energy — so energy is released. When a heavy uranium nucleus splits apart, the resultant products are more tightly bound — so energy is released.

Then the question becomes — can we work out a formula for the binding energy of a particular nucleus? Then, not only would we know how stable nuclei are, but we’d also be able to calculate how much energy nuclear reactions should release, or might require to happen. You’ve just got to figure out the binding energy before and after, and subtract them. The difference must be released.
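To make that “subtract the binding energies” step concrete, here’s a minimal sketch in Python for deuterium-tritium fusion, where a deuteron and a triton combine into helium-4 plus a free neutron. The binding energies are rounded values from standard tables, so treat the output as approximate:

```python
# Energy released = total binding energy after minus total binding energy before.
# Approximate tabulated binding energies, in MeV (a lone free neutron has zero binding energy).
deuterium = 2.22   # 1 proton + 1 neutron
tritium   = 8.48   # 1 proton + 2 neutrons
helium_4  = 28.30  # 2 protons + 2 neutrons

before = deuterium + tritium
after = helium_4            # the leftover free neutron contributes nothing
print(f"Energy released: {after - before:.1f} MeV")   # about 17.6 MeV
```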

Even if you don’t fully understand the strong force and quantum mechanics, can you do this based on studying different nuclei and different nuclear reactions? Can you do it based on a model that’s “wrong, but useful?” The answer to both of these questions is yes.

The formula that was used for decades to calculate binding energies — and is still used a lot today — is called the “semi-empirical mass formula.” Semi-empirical means that part of the formula is based on a model of the nucleus, while other parts are fitted to observations and experiments that tell us how nuclei actually behave.

It’s at times like this I wish I had a blackboard. But I don’t, so I’ll put the formula in the shownotes if you want to gaze at it, and just describe what each term is so that you get an idea of how it works. Then, armed with our vague understanding of the nucleus, we can talk about fission and fusion in later episodes!

The semi-empirical mass formula (SEMF) is so useful because it can tell you the approximate binding energy of a nucleus just based on two numbers: the number of protons, Z, which tells you the charge, and the total number of nucleons, A. So the semi-empirical mass formula starts off with what’s called the “liquid drop” model. The idea here is that we don’t really understand the nature of the strong force well enough to calculate things explicitly, but we do know some things. We know that each nucleon takes up the same volume, roughly. We can get a decent idea by imagining a droplet of water. Such a droplet is made of many tiny molecules, in the same way as the nucleus is made up of many tiny nucleons — protons and neutrons. Now, each nucleon attracts other nucleons. We know that every one of those interactions will contribute to the binding energy. To understand this, imagine you’re being held in place by many little strings — breaking each string requires a little more energy, and the total you’d need to put in would be proportional to the number of strings holding you down. But we also know that the range of the interaction must be limited, or all the nucleons would clump together and super-large nuclei would be not only possible, but *more stable* than lighter ones.



So, to a first approximation, we assume that each nucleon just interacts with its nearest neighbours. Each nucleon then has roughly the same number of interactions, so the total number of interactions is proportional to the number of nucleons. So we get a contribution to the binding energy that’s proportional to A. This is called the volume term — since each nucleon takes up the same volume, the total volume is also proportional to A. The radius of the nucleus is proportional to A^(1/3) in this model.

But we’ve made a slight mistake, because not every nucleon has the same number of nearest neighbours. If you’re in the middle of the nucleus, you might be touching other nucleons on all sides. But if you’re on the edge, you have fewer neighbours than nucleons in the middle. So this is corrected with a “surface term”, which subtracts a little binding energy to account for the nucleons that have fewer nearest neighbours. This is proportional to the surface area of the nucleus — and, since we know that each nucleon seems to take up the same amount of space, that scales as A^(2/3).

Then we look at the next biggest contribution to the energy. That’s the electrostatic repulsion. And, luckily, our laws of classical electromagnetism give us a really neat way of working out the energy that’s stored in a particular arrangement of charges! If you approximate the nucleus as a uniformly charged sphere, you can work out a nice formula that’s roughly proportional to the square of the number of charges, Z² — which makes sense, because every proton repels every other one, so there are about Z × Z interactions (strictly, Z(Z−1)/2 pairs) — and the lengthscale for the interaction is around the radius of the nucleus, which scales as A^(1/3). So you know that the electrostatic energy is proportional to Z² / A^(1/3), and you can work out the coefficient from nice old classical electrostatics.
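If you want to see where that comes from: the electrostatic self-energy of a uniformly charged sphere is a standard textbook result, and plugging in a nuclear radius that grows as A^(1/3) (with r₀ roughly 1.2 fm, a typical fitted value) gives the shape of the Coulomb term. In the full formula, Z² is often replaced by Z(Z−1), since a proton doesn’t repel itself.

$$E_{\text{Coulomb}} = \frac{3}{5}\,\frac{1}{4\pi\varepsilon_0}\,\frac{(Ze)^2}{R}, \qquad R = r_0\,A^{1/3},\ r_0 \approx 1.2\ \text{fm} \quad\Rightarrow\quad E_{\text{Coulomb}} \propto \frac{Z^2}{A^{1/3}}$$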

Those are the biggest three contributions, but there are two more terms that are a little more difficult to explain. They are essentially quantum-mechanical corrections to the classical formula that’s been worked out so far. Now, we haven’t covered QM in depth yet — it’s coming, I promise — but I’ll try to explain quickly anyhow.

The first of these is the asymmetry term, and this basically acts to make sure that nuclei don’t have too many more neutrons than protons. Because there’s a pretty obvious question that may have occurred to you by now: since protons repel each other via the electrostatic force, and neutrons just attract via the strong force, why isn’t it always stable to add more neutrons? And wouldn’t the most stable nucleus possible just be a huge clump of neutrons? If neutrons are the glue, why isn’t the world just made of glue? (And, hence, very boring.)

Well, a free neutron has a half-life of about ten minutes (an average lifetime of around fifteen). That means that if you had a batch of free neutrons floating around, half of them would decay before you’d even finished your episode of The Simpsons. So free neutrons decay pretty rapidly into protons, electrons and antineutrinos… and usually can’t travel particularly far. Naively, you might think this explains why we don’t see clumps of neutrons forming very often. But, of course, on closer inspection this is no answer at all. After all, the neutron is stable inside the nucleus; whatever the strong force does, being bound seems to prevent the neutrons from decaying; so why wouldn’t a clump of neutrons be stable? And why wouldn’t adding more neutrons make the nucleus more stable?





It turns out that both questions can be answered by the properties of the strong force, and quantum mechanics. One really subtle point is about how neutrons and protons bind together, which owes to something called isospin… without getting into it too much here, it’s possible for a proton and a neutron to bind together slightly more tightly than two neutrons, or two protons. The strong force turns out to be just strong enough to keep a proton-neutron pair together, but two protons or two neutrons on their own are unbound. And, by one of those quirks of physics that really start to pile up when you go hunting for them, it’s actually only a very small difference in energy — around 0.1 MeV, substantially less than the electron’s rest-mass energy — that separates our universe from one with stable proton-proton pairs and stable neutron-neutron pairs. If the strong force were just that teensy bit stronger, or didn’t care so much about spin alignment, they would be stable. I don’t know what a universe would look like if that were the case, but I imagine it would be very different in all kinds of ways — chemistry as we know it might look very different.

The other thing to keep in mind when trying to figure out why there’s an asymmetry term, and why there aren’t loads of neutrons clumping into the nucleus, is quantum mechanical in nature. The Pauli Exclusion Principle tells us that no two fermions can be in the same quantum state. That means that no two protons, or no two neutrons, can have the same momentum, spin, and so on.

The reason that this is important for the nucleus is that only so many quantum states are available. And because they can be “filled up” by neutrons and protons, when you add more neutrons to the nucleus, you’re filling up the available states for neutrons. The lowest energy states — with the lowest momentum — fill up first, but if you keep adding neutrons, eventually, they’ll have to be in higher and higher energy states — all the lower states will be full. So you need more energy to have this arrangement.

This is where the asymmetry term comes from. It’s proportional to the square of the difference between the number of neutrons and the number of protons, divided by the total number of nucleons. It essentially tells you that there’s an energy penalty for having many more of one kind of nucleon than the other. Now, there’s competition between the asymmetry term and the electrostatic term. Because protons repel each other, it’s good for the nucleus to have fewer protons than neutrons. But because there’s an energy penalty for stacking up too many neutrons due to Pauli exclusion, you can’t just make things more and more stable by adding endless neutrons. It turns out that you can work out, based on the balance between these terms, the ideal ratio of neutrons to protons. So stable nuclei tend to have slightly more neutrons than protons. As an example, one of the most stable nuclei you can get is iron-56 — which has 26 protons and 30 neutrons.

The final term in the semi-empirical mass formula is the pairing term. This one is also quantum; it’s essentially due to the stacking-up effect we’ve just described. The nucleons are stacking up in their energy levels. Their spins can be up or down. So for every energy level, you have two spin states, up or down, associated with that same energy. That means that if you have an even number of neutrons, you can completely pair them up in the occupied levels. But if you have an odd number, the top level has to be half full, and so there’s a small extra energy cost whenever the number of neutrons or protons is odd. The pairing term essentially accounts for this, and penalises the nucleus for adding a neutron or proton that bumps you up into a higher energy level.
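Since I can’t show you a blackboard, here’s a minimal sketch of the whole formula in Python. The coefficients are one commonly quoted set of fitted values, in MeV; treat them, and the output, as approximate, since different textbooks fit slightly different numbers:

```python
def binding_energy(Z, A):
    """Approximate nuclear binding energy in MeV from the semi-empirical
    mass formula. Z = number of protons, A = total number of nucleons."""
    N = A - Z  # number of neutrons

    # One commonly quoted set of fitted coefficients, in MeV (approximate).
    a_volume, a_surface, a_coulomb, a_asymmetry, a_pairing = 15.8, 18.3, 0.714, 23.2, 12.0

    B = a_volume * A                             # volume term: each nucleon bound to its neighbours
    B -= a_surface * A ** (2 / 3)                # surface term: edge nucleons have fewer neighbours
    B -= a_coulomb * Z * (Z - 1) / A ** (1 / 3)  # Coulomb repulsion between the protons
    B -= a_asymmetry * (N - Z) ** 2 / A          # asymmetry penalty for too many of one kind

    # Pairing term: bonus if both Z and N are even, penalty if both are odd.
    if Z % 2 == 0 and N % 2 == 0:
        B += a_pairing / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_pairing / A ** 0.5

    return B

# Iron-56: 26 protons, 30 neutrons.
print(f"Fe-56 binding energy: {binding_energy(26, 56):.0f} MeV "
      f"({binding_energy(26, 56) / 56:.2f} MeV per nucleon)")

# Which Z maximises the binding energy for A = 56?
# (Stable nuclei have slightly more neutrons than protons.)
best_Z = max(range(1, 56), key=lambda Z: binding_energy(Z, 56))
print(f"Most tightly bound Z for A = 56: {best_Z}")
```

For comparison, the measured binding energy of iron-56 is around 492 MeV, so this crude sketch lands within a per cent or so — and it picks out Z = 26, iron, as the most tightly bound choice for 56 nucleons.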

And this essentially captures all of the dominant contributions to the energy of a nucleus. You’re competing between the liquid-drop-like terms (the volume and surface terms), for nucleons that attract each other via the strong force. Then the electrostatic repulsion of the protons. And then quantum mechanical effects due to Pauli exclusion, energy levels in the nucleus, and the “stacking up” of protons and neutrons. And with that semi-empirical mass formula, you can calculate an awful lot about nuclei. You can figure out a pretty decent pattern for which nuclei should be stable, and how they’ll want to decay, just based on their binding energy. You know how much energy nuclear reactions will release, and that’s pretty important. When you’re thinking about a new nucleus, it’s probably the first thing you want to evaluate and compare. And you can plot it as a function of A and Z, the number of nucleons and the number of protons — there’s a pretty good plot even on Wikipedia — and you find that the nuclei with the highest binding energy per nucleon tend to be the most stable and common ones out there for their particular elements.

It’s a model. It’s not complete. There are things it can’t explain — weird patches of stability, nuclei with so-called “magic numbers” of protons or neutrons that are extra stable and need the more in-depth nuclear shell model to understand. But in 1935, it blew everything else out of the water.

In the last episode of this saga, we talked to you about Rutherford’s discovery of the nucleus. How the general public gradually realised that this new field of study wasn’t just going to be some kind of esoteric pursuit for physicists with insatiable curiosity and rather too much time on their hands, but that it had the potential for technological applications that would change the world forever.

And this could not have been clearer to the powers-that-be by the time Carl Friedrich von Weizsäcker came up with the semi-empirical mass formula. Just four years later, the pioneering work of Lise Meitner and Otto Hahn led to the discovery that you could make nuclei of uranium split apart if you bombarded them with an additional neutron. Weizsäcker, and — by his reckoning — hundreds of other physicists instantly realised what this might mean. There would be the potential for a chain reaction, with each fission triggering multiple additional nuclei to split apart. And who knew where this chain reaction would end? The result could be a sudden, massive release of energy. A nuclear explosion.

During the Second World War, Weizsäcker was sucked into the German atomic bomb project. Initially working on more peaceful applications of nuclear energy, in 1940 he reported to the army that “energy production” from refined uranium was possible. By 1942, this had become decidedly more concrete. He filed a patent on a “process to generate energy and neutrons by an explosion… e.g. a bomb”.

It’s hard to say how dangerous the Nazi bomb project really was. Undoubtedly, Heisenberg and Weizsäcker worked on it; historians will forever debate how far they got and how enthusiastically they pursued their goal. When Weizsäcker’s lab was raided by American soldiers in 1944, they found no evidence of major progress towards a workable nuclear bomb. After the war, several of the physicists involved said that they had pursued it half-heartedly, hoping the project would fail because they feared what the Nazis would do with such a bomb. It seems that, towards the end of the war, as the Red Army turned the tide on the Eastern Front, all the resources had to be devoted to the immediate war effort, to weapons that worked rather than weapons that might work. If things had gone differently, who knows what might have happened?

Weizsäcker made some mixed statements about this himself. In 1945, immediately after the news that the Americans had the bomb and had dropped it on Hiroshima, Weizsäcker was being detained with other German physicists at Farm Hall in Cambridgeshire — and he was secretly recorded as saying “I believe the reason we didn’t do it was because all the physicists didn’t want to do it, on principle. If we had wanted Germany to win the war we would have succeeded!”

This was the reaction immediately after the war. But things weren’t always this unambiguous. Other historians have suggested that this amounted to “concocting” an alternative version of history — one that, nevertheless, Heisenberg and Weizsäcker came to persuade themselves to believe. Historian William Sweet referred to it as “a thin and repugnant effort at rationalization”, and Weizsäcker’s statements later in life were mixed.

For example, he admitted in 1957: “We wanted to know if chain reactions were possible. No matter what we would end up doing with our knowledge — we wanted to know.” He said that they were ‘saved by grace’ from the temptation to make the bomb.

This was the strange world in which nuclear physicists found themselves in the 1930s and 1940s. Before, they had been intellectual leaders: respected academics, yes, but hardly figures of pivotal importance in geopolitics. As soon as it was considered possible to build a nuclear bomb, however, the information, knowledge and understanding contained within the brains of these physicists became a dangerous — historically decisive — thing to possess.

As is the case with many physicists who worked on nuclear weapons, Weizsäcker — in some senses — seemed to spend the rest of his life trying to make the world a better place. He devoted ever more time to studies of philosophy and politics. In the episodes on Stalin and the Scientists, we described how Sakharov — the brilliant Soviet physicist who was pivotal in their bomb project — became a dissident within the USSR, at great personal risk, and eventually won the Nobel Peace Prize for his activism in favour of peace and against nuclear risks. Weizsäcker spent his later life as much in philosophy as physics, writing passionately about nuclear war and arguing against West Germany acquiring nuclear weapons. He formulated a plan for a world government, as part of the “One World Or None” school of thinking: nuclear weapons, and the newfound capacity of humanity to destroy itself in struggles between great powers, must ultimately mean no more great powers. He wrote about the inequality between what were then called the First World and the Third World countries, and the dangers of environmental destruction and degradation.

What motivated so many nuclear physicists to become pacifists, activists, and write in favour of peace? Personal convictions? A desire for atonement? Hoping, somehow, that Pandora’s box could be sealed shut again — or, at any rate, that the world could be run by rationality and kindness, rather than tearing itself to shreds in a fit of rage and misunderstanding?

I feel that those who helped, however unintentionally, to make the atomic bomb possible had a deep understanding of how it feels for things to rapidly spiral out of control. The chain reaction that occurs at the heart of a nuclear bomb is much like the chain reaction that scientific discoveries themselves can set off. Innocuous at first, the consequences snowball and butterfly into something quite unintended, something that moves beyond the familiar environments of the lab and academia, and changes the world. It’s easy to see why, when you’re more aware than most of the potential for unintended consequences to cause great harm, you might want to live in a wiser and more careful world.
Next episode, I’ll talk about that discovery of nuclear fission mentioned earlier. Then we’ll go into a history of the use of fission as a form of nuclear power. Then we’ll loop back around and launch into a series on nuclear fusion — from its early days when Edward Teller dreamed of constructing a “Super” bomb, through all the dreams of harnessing its use for peaceful means.

For now, though, I’ll leave you with a quote from Carl Friedrich von Weizsäcker, who deduced the semi-empirical mass formula and worked on the Nazi bomb. “The problem is how humans interact with power… In view of the possibility of reason and peace, power is not necessarily the last word. That is for history to decide.”


Thanks for listening etc.
