Technology, inequality, and global catastrophic risks

These podcast scripts were written originally in 2018, and updated in 2020. They relate to a nexus between technological development, rising inequality, and global catastrophic risks. You can download the original episodes here: https://physicalattraction.libsyn.com/size/5/?search=inequality

Technology, inequality, and catastrophic risks: The Great Leveller

Hi all, and welcome to this episode of Physical Attraction. This episode, our theme is technology, inequality, and global catastrophic risk. I’m going to be talking about where these three — technology, inequality, and catastrophic risks — intersect: how they might fuel each other, and what we can hope to do to live in a fairer and less risky world. All in a day’s work!

In writing the TEOTWAWKI specials from a few years ago, which predominantly focused on all of the ghastly future projections for how the world might end — I’ve read some wonderful writing about how we should think about, categorize, and consider the risks that could lead to terrible events. We’re talking about mass casualties, a breakdown in the social order, a severe depression in what you might call progress for the species, and perhaps even the extinction of humanity — or even all life on Earth. These are weighty topics, and, so often, they seem so far into the realm of abstraction that it’s difficult to know how they can possibly relate to the world today. And it can often seem completely outside of our control. In the case of supervolcanoes and asteroid strikes — they were once called acts of God for a reason; they’re difficult to predict, and they don’t stem from any human actions. For a natural pandemic, it almost seems like a unique confluence of circumstances has to take place; you need the right virus to have the right properties when it jumps over to humans or reappears. It needs to appear in the right place. It seems like rolling a vast cosmic die, and hoping you continue to get lucky.

There are threats like that. But there’s usually some way humans can intervene. We can dream of tracking and deflecting asteroids; stamping out would-be pandemics; responding effectively to natural disasters, mitigating climate change, building responsible and “safe” AI, and electing leaders who won’t use nuclear weapons. I truly don’t believe any of these apocalyptic scenarios are inevitable — not even close. I think all of them can be addressed if we understand the risk — which is why it’s so important to examine these risks in as much detail as possible, with as much understanding of the causes and consequences as we possibly can. It’s like suddenly noticing you have some strange new medical symptoms. Our instinct is to ignore them, to try not to think about them, to not be so gloomy and hope they go away. “If I die, I die; it was meant to be.” This is an attitude we can have when we feel afraid and powerless. But of course, regardless of whether we have a cold or a terminal illness, it’s best to find out what it is, and what we can do about it.

I don’t think things are inevitable. I don’t believe in that kind of historical determinism. Mainly because people who do believe in historical determinism generally end up being embarrassed. Marx and Engels held that capitalism would inevitably collapse under the weight of its own contradictions, leading to a socialist utopia. Didn’t work out so well. Francis Fukuyama, when the Cold War was ending, wrote a famous piece called “The End of History?”. The idea here was that Westernised, liberal democracy — having triumphed over first fascism and Nazism, then socialism and communism, and then the totalitarianism of the USSR after it devolved far from those original Communist ideals — Westernised, liberal democracy was triumphant and — eventually, once all of the kinks were ironed out — the world would look like that: this was the final form of all governance.

A few decades on, it seems naïve to say that history ended in 1989: even Fukuyama admitted as much in a recent op-ed, although he insisted that society was definitely evolving towards some eventual state where everyone was governed by liberal democracies — this, as opposed to totalitarian systems like Nazi Germany or Stalinist Russia, or anarchic systems, or else monarchies or anything like that.

Maybe historical and political development does have an end state. I look at the system like a physicist; we look for steady states in a system, and assume that, eventually, if nudged around enough, the system will probably find itself at a stable state. What does that look like for the human race? “Everyone is dead” is a pretty stable state; one could argue that “everyone is fine and no longer wants for anything”, “humans evolve to a point where they no longer need anything else, or colonise some part of the universe”, or “AI takes over and waits until the Universe gets cold enough to do calculations” are other candidate steady states.

But predicting that we’re anywhere near a steady state, or that final transformation that will take us there, or that it’s inevitable that things end up like this — that seems foolish.

And yet it’s undeniable that this last century, since the nuclear bomb arrived, has had an ever-multiplying potential for existential risks — genuine pathways that could take us to that bad “Game Over” screen for the species. And this all stems from our rapidly expanding technology. Technology makes cyberwarfare something to fear; it allows Donald Trump and Vladimir Putin to kill millions with an order; it allows us to imagine bioweapons and artificial intelligence that exceeds our capacity, both of which might — mindlessly, in their own ways — do away with us.

There’s another trend that has taken place since the Second World War. It has by no means been uniform; there are always exceptions, of course. But it’s fair to say that inequality has risen, too. I want to persuade you that inequality and existential risk are intimately linked, and that an unequal society is a kind of catastrophic risk for the species in itself.

First, looking globally. 71% of the world’s population shares a measly 3% of the world’s wealth. Meanwhile, almost half of global wealth is owned by just 1% of individuals. At the very top, the ratio gets even worse, with 12.8% of the world’s wealth owned by 0.004% of the world’s population. In other words, an average super-rich person — in that top 0.004% — owns as much as 76,000 average poor people, where 71% of the world qualifies as “poor.” We have a situation where individual people own more wealth than entire countries. According to an Oxfam report, the richest 8 people combined own as much wealth as half of the world’s population.
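If you want to see where that 76,000 figure comes from, it’s just the arithmetic implied by the shares quoted above — here’s a quick sketch in Python, using nothing beyond those percentages:

```python
# Back-of-the-envelope check of the ratio quoted above, using only the
# shares cited in this episode (not new data).

top_share, top_pop = 0.128, 0.00004      # 12.8% of wealth, 0.004% of people
bottom_share, bottom_pop = 0.03, 0.71    # 3% of wealth, 71% of people

avg_rich = top_share / top_pop           # average holding, in "world shares"
avg_poor = bottom_share / bottom_pop

print(f"one super-rich person ~ {avg_rich / avg_poor:,.0f} average poor people")
# -> one super-rich person ~ 75,733 average poor people: roughly 76,000
```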

I say all this not to demonise rich people. For many of these statistics, I am closer to that top 1% — the hideous, selfish people who hoard the world’s wealth — than the bottom 50%. In fact, an income of around $32,000 a year will put you straight into the top 1% of earners globally, and if you earn more than $16,000 a year, you’re still in the top 10%. I’m not going to ask how much everyone makes — this is just to give us all a sense of perspective.

And this is not to say that things are necessarily all doom and gloom either. In fact, global income inequality is gradually falling as globalisation takes hold. This brings its own problems to the societies that were once the wealthiest — and inequality is still shocking, and only falling slowly. The progress is cheap, essentially, because the baseline was so low. If a billion people live on 50 cents a day, and that gets boosted to $1, then the cost is only a couple of hundred billion dollars a year — less than the money the US government borrows every year in its deficit — but you’ve doubled the income of a billion people.

But inequality remains incredibly high, and there are powerful forces fighting to keep it that way. Globalisation may be dragging up the poor, but it’s dragging up the rich far more — as evidenced by the fact that in the last 20 years, their incomes have grown 182x faster. Otherwise, it probably wouldn’t be allowed to happen.

You will quite regularly read arguments — for example, from people like Steven Pinker — which point to the incredible progress that has been made in many different domains as evidence that things are getting universally better for everyone. And, indeed, the statistics are impressive: we’ve utterly wiped out diseases like smallpox, and are reducing deaths from all kinds of preventable diseases all the time: the average person in the West today lives in luxury that would have been unknown to all but the kings and lords of a few hundred years ago, and the rest of the world is gradually catching up: infant mortality is down, life expectancy and literacy up, wars and violent conflicts in general are growing less frequent over time, etc., etc.

All of this is true, of course, and shouldn’t be forgotten… but sometimes I find it difficult to get too enthusiastic about these things. I worry that there’s a tendency to say that “the world is getting endlessly better, obviously” as if this is some kind of excuse for inaction: as if things aren’t still radically and horribly unfair in society and in the world in which we live: as if there’s no injustice, just because most people are statistically somewhat better off than they were decades ago. One of the favourite statistics of this kind of “optimist” is that, over the last 25 years, the number of people in extreme poverty has fallen by a billion. That’s great. But look at what it actually entails. Extreme poverty is living on less than $1.90 a day. 700 million people still live in extreme poverty; so even if they all currently live on exactly nothing, we could have NOBODY in extreme poverty for around $500bn/year. Between 2017 and 2018, the amount of wealth owned by billionaires — just 2,200 people around the world who have more than a billion dollars — increased from $7.7 trillion to $9.1 trillion: by about $1.4 trillion. In other words, if we were willing to have a society just fractionally fairer — billionaires get to keep all of their money, and their wealth can actually continue to grow, but only, say, two-thirds as fast — we could utterly eradicate extreme poverty overnight. So should we really be happy with this state of affairs, as 700 million people scrape by on a combined income that two thousand-odd billionaires would barely notice missing?
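You can check that sum yourself. Here’s the back-of-the-envelope version — note that treating everyone below the line as owning exactly nothing makes this an upper bound on the true cost:

```python
# The sums behind "we could end extreme poverty for ~$500bn/year".

extreme_poor = 700e6                  # people in extreme poverty
poverty_line = 1.90                   # dollars per day
cost = extreme_poor * poverty_line * 365
print(f"cost per year: ${cost/1e9:.0f}bn")                            # ~$485bn

billionaire_gain = 9.1e12 - 7.7e12    # growth in billionaire wealth, 2017-18
print(f"billionaire wealth growth: ${billionaire_gain/1e12:.1f}tn")   # $1.4tn
print(f"share of that growth needed: {cost/billionaire_gain:.0%}")    # ~35%
# i.e. billionaire wealth could still have grown roughly two-thirds as fast
```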

And, of course, you can’t reduce everything to a simple number. Saying that “the world is getting endlessly better” ignores what we’re sacrificing. The way our society is organised at the moment is environmentally unsustainable: we’re consuming the Earth’s natural resources at an astounding rate. And risks can increase, even as things get better — and when those improvements are driven by increasing technology, so are the risks.

Within societies, inequality is also increasing. The Gini coefficient often gets used to express inequality on a scale from 0 to 1, based on the distribution of wealth or income in that society. 0 is a perfectly equal society; 1 is a perfectly unequal society, where one person owns all of the wealth. Now, obviously, trying to boil down something as complex, multi-faceted, and uniquely human as inequality into a single number is not ideal — you miss so much nuance and detail — but for the quickest of outlines, it serves a purpose, in the same way that categorizing climate change scenarios by the temperature increase we’d expect serves a purpose even though important details are missed. Many have pointed out, for example, that it underestimates the impact of the very largest fortunes. Globally, the Gini coefficient was a pretty high 0.69; in the last ten years, this globalisation phenomenon has caused it to dip slightly, to 0.65. I would say this is less a case of the world tending to become more equal — individual societies, as we shall see, are getting less and less equal — and more a case of globalisation allowing some kind of equilibration of the shocking disparities that existed between countries beforehand.
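For the curious: the Gini coefficient is simple to compute from a list of wealth values. Here’s a minimal toy implementation of the standard formula — mine, not from any of the sources discussed here:

```python
# Minimal Gini coefficient: G = sum_i (2i - n - 1) * x_i / (n^2 * mean),
# where x is sorted ascending and i runs from 1 to n.

def gini(wealth):
    x = sorted(wealth)
    n = len(x)
    mean = sum(x) / n
    return sum((2 * (i + 1) - n - 1) * xi for i, xi in enumerate(x)) / (n * n * mean)

print(gini([1, 1, 1, 1]))      # 0.0  -- perfect equality
print(gini([0, 0, 0, 100]))    # 0.75 -- one person owns everything
                               #         (the max for n people is (n-1)/n)
print(gini([1, 2, 3, 4, 10]))  # 0.4  -- somewhere in between
```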

So in the US, the Gini coefficient in 1969 was around 0.35. Since then, it’s steadily climbed to around 0.45. (This is by the reckoning of the US Census Bureau; it’s calculated in loads of different ways — pre-tax and post-tax, income and wealth — so you have to be careful, but all the graphs go up.) The US is becoming a more and more unequal society. The same is true in the United Kingdom. The same is true in growing, developing economies like China and India — their economies are growing rapidly, but the spoils are shared out unequally. To give you an idea of what that means, 0.45 is around the Gini coefficient estimated for the Roman Empire — back when one man, the Emperor, owned thousands of slaves, and the richest province of the entire Empire, Egypt, was considered his personal domain; he was hundreds of thousands of times richer than the Senators, who were in turn vastly richer than the equestrians, who were thousands of times richer than all the free people — who were at least free. [Although, in Rome, slaves could buy their own freedom, and free people often sold their children into slavery when they fell on financial hard times. But I can’t digress about Rome or we’ll be here forever.]

0.45 is not the highest Gini coefficient. South Africa, which struggles with the legacy of apartheid and extreme inequality, has a Gini closer to 0.6, or possibly even higher pre-tax. That’s about as unequal as any society on Earth gets today. But the USA once had a Gini coefficient above 0.5 — at the peak of the boom years, in the late 1920s, just before the Great Depression. Picture Jay Gatsby and you’re basically there.

What happened? The Second World War happened. This major economic shock to the system resulted in huge changes. Huge taxes on the rich — up to 90% in some cases — helped to pay for the war. The population is mobilised for fighting — less unemployment or underemployment. Capital, that is accumulated wealth, becomes less valuable as the state intervenes — and as things get destroyed. If you’re a homeless person in Dresden, and the entire city is bombed to rubble, then you’ve lost far less than the guy with a mansion: as terrible as the event is, the Gini coefficient comes down. And then, after this mass mobilisation warfare, you have all the social knock-on effects: people unionise, people want the right to vote, and so on. It’s no coincidence that the National Health Service and many of the socialised benefits in the UK and Europe today that have kept us from becoming quite as unequal as the US — although still not great — arose after the Second World War.

This is not a one-off event, though. Walter Scheidel, in his fantastic, methodical survey of the topic — The Great Leveller — explains that, essentially, inequality is only ever reversed through these kinds of catastrophe.

He actually goes through all of human history evaluating Gini coefficients, like Doctor Who with a doctorate from the London School of Economics — from the 21st century all the way back to prehistoric cave-dwellers, working out income inequality in eras where the records weren’t particularly good based on who got the fancy flint in their grave goods. He surveys a huge, diverse range of societies throughout the world — and comes to an inescapable, almost deterministic conclusion.

He argues that only revolutions, wars, and other catastrophes have historically reduced inequality. The rest of the time, inequality tends to creep up — different amounts in different locations, with different policies and different economic conditions, but generally, it creeps up. The only thing that seems to reduce inequality is some kind of large-scale catastrophe that reshapes society.

A perfect example is the Black Death in Europe, which (by reducing the population and therefore the labour supply that was available) increased wages and reduced inequality. The violent revolutions under Stalin and Mao, where the rich were murdered or arrested and their assets forcibly seized and redistributed — this type of event also reverses inequality and the steady peacetime climb of the Gini coefficient.

And the last of Scheidel’s Four Horsemen — along with mass-mobilisation warfare, violent revolution, and plague or natural disaster — is state failure. The classic example here is when the Roman Empire left Britain in the 5th century, and collapsed in the West almost altogether. With no state around to impose order, things descended into lawlessness and general anarchy, and as a result everyone got poorer. Rome came and brought an aristocracy, wealthy bureaucrats, and inequality with it; when the state collapsed and everything fragmented back into squabbling, smaller kingdoms and loosely-bound local fraternities of raiders and farmers, everyone got less wealthy, and the inequality was wiped out.

There are some pretty obvious reasons why this happens, too. If there’s a disaster — say, a gigantic flood that wipes out everything — the poorest people lose relatively little wealth, while rich people could lose out on millions. The result is a society that’s more equal, even if it is poorer. If mass death is involved, as in the Black Death example, the fact that there are fewer survivors allows them to band together and demand higher wages for their labour. Revolutions forcibly seek to confiscate and redistribute wealth. And, after a disaster — or during a mass-mobilisation war, like the Second World War — the tax rates on the highest earners are generally very high. From 1941 right the way up until the early 1960s, the rate of tax on the top earners never fell below 80% and was often even higher.
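You can watch this levelling mechanism in a toy simulation. Everything here is illustrative — an assumed Pareto wealth distribution and an arbitrary 90% shock above a subsistence floor — reusing the Gini helper sketched earlier:

```python
import random

def gini(wealth):  # same toy helper as in the earlier sketch
    x = sorted(wealth)
    n = len(x)
    mean = sum(x) / n
    return sum((2 * (i + 1) - n - 1) * xi for i, xi in enumerate(x)) / (n * n * mean)

random.seed(1)
# A toy society: Pareto-distributed wealth -- most own little, a few own a lot.
society = [random.paretovariate(1.5) for _ in range(100_000)]
print(f"before the flood: G = {gini(society):.2f}")   # ~0.5

# The "gigantic flood": everyone keeps a subsistence floor, and loses 90% of
# whatever they held above it -- the rich lose by far the most in absolute terms.
floor = 1.0
after = [floor + 0.1 * (w - floor) for w in society]
print(f"after the flood:  G = {gini(after):.2f}")     # ~0.13: poorer, but more equal
```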


However you slice it, The Great Leveller is a fascinating read — and a really troubling one. On one level, it seems to imply that living in an unequal society where inequality is growing is almost the price you pay for peacetime — for not living through some great catastrophe. We know that societies can basically survive without these revolutions, even if unequal, providing everyone feels the benefit of economic growth. But it hardly seems like a great prescription for peace.

On the other hand, if inequality invariably increases up to a point and then something catastrophic happens, like a war or a revolution… what does that mean for us, in a society where inequality is approaching the heights of Europe in the 1930s? That decade was defined, in Europe at least, by violent revolutions from the far left and the far right. But these mechanisms for reducing inequality are all terrible. We don’t want a natural disaster or mega-pandemic; we don’t want civilization to have to collapse to solve the problem of inequality. In the modern era of war, the kind of World War II mass mobilization warfare between similarly-sized states doesn’t happen so much any more — and with nuclear weapons, any such war could escalate into an existential risk all by itself. You’re left with a violent revolution where the rich are slaughtered and their property is confiscated and redistributed as the *least violent option* — unless, of course, we can come up with something better [which I really think we should.] The prospect that our only solution to the problem of growing inequality is to just wait for some horrendous catastrophe to “level” everything is really, really bleak and depressing. Even for me.

The really depressing thing, of course, is that many of the “crises” that we might foresee coming down the line are only likely to exacerbate inequality. Chief amongst these is climate change, which is really a great engine of inequality: it’s disproportionately caused by the actions of the wealthy, and the burden of climate damages falls disproportionately on the poor and vulnerable. The lowest-income, predominantly tropical nations will be hardest hit by extremes in heat, droughts, floods, and crop failures — and they have the least capacity to adapt to the changing climate. The dystopian face of climate change is climate refugees. Already, more than 20 million people a year are displaced from their homes by extreme weather events — and the World Bank projects that this could rise to as many as 200 million people displaced by 2050, under climate change. In the most extreme — and, thankfully, unlikely — projections for climate change, parts of the tropics become uninhabitable: the heat stress from going outside in these regions in the summer is enough to kill you. So this is the sort of thing that keeps budding climate scientists like me up at night. And inevitably, inevitably, this is only going to exacerbate inequality.

Similarly, we can look to other trends — the automation of people’s jobs through machine learning, or even robots. If this materialises on the scale that many people have predicted, it is only going to exacerbate inequality. For a start, wealthy people will have independent means of living: they won’t have to take whatever work is going and compete with machines that work for free. People with independent wealth can take time to retrain and learn new occupations — they have breathing space to do that in a way that people living paycheck to paycheck do not. And, while people have argued that lots of traditionally higher-paying jobs requiring high educational attainment could be partially or entirely automated in the future, in practice — and we see this all around us — many of the jobs being automated away first are the lower-paying occupations.

I suppose some of the other trends — for example, technological apocalypses like the ones we dreamt of in the TEOTWAWKI specials, way back when — might be good levellers. A nuclear war certainly would: so would a superintelligent AI that wiped out the human race. But I don’t think it’s the ideal outcome.

Technology, Inequality, and Catastrophic Risks: Does Technology Help Us?

So, here we are. The Great Leveller tells us that inequality tends to increase in societies over time until some disaster or large-scale external shock to the system shakes things out and inequality can fall again. Yet we look around us now and see that many of the trends and forces that will shape our world in the next century are only going to increase inequality.

It’s kind of hard not to come to the conclusion that, at least in the countries where growing inequality is a problem, our institutions are terrible at dealing with it. Take our solution to the financial crisis of 2007–8 — which was, broadly, a bail-out of the banks and a process called quantitative easing. Per the Bank of England: “Quantitative easing does not involve literally printing more money. Instead, we create new money digitally.” Yep. The argument here was that banks were too big to fail, and might have collapsed the entire global economy if they weren’t rescued with spending and stimulus.

I’m not an economist; I don’t know whether that’s true or not — but this policy of quantitative easing, which essentially seeks to inject money into the economy by buying up assets owned by banks… Regardless of whether or not it prevented the recession from getting even worse, it was certainly a solution that made the inequality problem worse. The stock market soared back up again, but wages have remained flat; and companies that have realised higher profits in the improved economy have used that to buy back shares rather than voluntarily increasing wages for the workers. In other words, people with assets — who owned companies, or at least shares in them — got richer, while people without them remained where they were. Even a conservative paper noted that, in the UK under quantitative easing, the least wealthy 10% gained £3000 while the wealthiest 10% gained £350,000. Regardless of whether you think quantitative easing is a sketchy redistribution of wealth to the top in response to a crisis, or the only thing that could have got us out of the financial crisis of 2008, it’s difficult to argue that it’s done anything to help income inequality.

So I suppose the question comes back to how much inequality you personally think is necessary or justifiable, or sustainable within a society. Some inequality is certainly inevitable. But how much can society sustain?

The crises that we’re dealing with lately aren’t doing anything to address the problem of rising inequality. In fact, pretty much everything that’s going on at the moment is likely to cause rising inequality. Climate change disproportionately affects the poorest and the least able to adapt — both because of where extreme weather events and extreme heat events hit, and because the poorest nations disproportionately suffer when these natural disasters arise. If you’re already struggling for food or water, climate shocks can prove fatal in a way that’s less likely to kill people in wealthier nations. Within societies, the wealthiest are likely to be able to shield themselves from the effect of climate change. And, of course, there’s the fundamental fact that globally, this has historically been a problem disproportionately caused by the wealthy where the impacts disproportionately fall on the poor: all enhancing inequality.

The system — in fact, most systems of government throughout history, if you think about Scheidel’s survey — is normally powerless to deal with rising inequality; it either cannot, or it does not want to. Walter Scheidel and The Great Leveller suggest that rising inequality only ever gets reversed by a violent catastrophe of some kind — but we don’t want that, and we have no idea what a peaceful alternative would look like.

In physics and philosophy, there’s this concept called the anthropic principle — roughly, the idea that the Universe we observe must be one compatible with our existence, because otherwise we wouldn’t be here to observe it.

In the same way as you have a weak and a strong anthropic principle, I also think you have a weak and strong version of the inequality-catastrophe connection, or historical determinism, or whatever you want to call it. In the weak version, pretty much only violent catastrophes, wars, revolutions, plagues, whatever — can reverse inequality, for the reasons that Scheidel points out. So whenever you see inequality shrink, you’ll find a catastrophe. But in the strong version, the rising inequality ACTUALLY CAUSES some of those catastrophes — the catastrophes are a safety valve for the rising inequality. This isn’t too dissimilar to some of Marxist theory, which holds that inequality inevitably increases under a capitalist system until conditions become unbearable and there is a revolution. But I actually think that there are other mechanisms now, in the modern world, that are becoming more prevalent and more realistic — which will link together inequality and catastrophic risk. The development of technology is the crucial thing that, in many ways, will bind them closer together.

For example: the fact that we don’t have adequate healthcare everywhere, which is in large part due to poverty, means that diseases that could be more easily controlled can become epidemics, which could become pandemics in an ever-more interconnected world. Ebola, SARS, AIDS: these illnesses first took hold in places where the medical infrastructure is less robust. This is a direct consequence of an unequal society. Unequal societies lead to political instability, raising the possibility of violent revolutions, civil wars, and totalitarian governments; and this, in turn, can lead to proxy wars that can escalate into larger, more globally threatening wars. Societies that break down (or are broken down) can breed extremism and a rise in terrorism; they produce dictators who can behave in irrational and unstable ways. And if you wanted to imagine someone pulling the trigger on some supremely destructive bioweapon or nuclear weapon, a dictator is generally who you’d conjure up.

And this is increasingly a problem because of technological advances. Consider the risks we discussed in previous episodes — like the advances in biotechnology. CRISPR-Cas9 or similar techniques could allow people to engineer pandemics that are far more deadly than anything that occurs naturally — or to resurrect banished diseases like smallpox as bioweapons. There was that study which showed that all it took was a couple of researchers, a few hundred thousand dollars, and a decent-sized lab to bring the horsepox virus, once extinct, back to life. The difference between a bioweapon and a nuclear weapon is that, theoretically, you may not require the same raw materials and expertise to produce the former. Uranium is closely guarded, but you can get much of what you might need to create a bioweapon through the mail.

This isn’t necessarily a sinister thing. It’s just that as technology advances and multiplies, there are going to be more ways to weaponise it. It takes thousands of people to write the structure of the internet and programming languages; millions of people to use that infrastructure to give it its unique power — but maybe only one to write a really nasty computer virus that exploits a vulnerability and can wreak havoc. It took decades of scientific research to sequence the human genome. Now it can be done by a startup for a few hundred dollars; it might only take one person to design a supervirus or bring smallpox back.

Technological development makes doing impossible things easier and more accessible; that’s kind of the point. A single person would find it very difficult to design and build an autonomous drone, but it’s hardly a huge leap — once you have an autonomous drone, which may well be designed for perfectly innocent reasons — to strap some explosives to the back of it and tell it to go and kill someone. Nanotechnology could go this way in the future, with open-source platforms that allow you to design and programme your own nanobots. It would seem, perhaps, not beyond the realms of possibility for one person to design and programme nanobots to self-replicate and destroy lives or property, like metal viruses — even if the grey goo scenario that people were originally worried about never comes to pass.

Compare this to nuclear weapons, which can take the efforts of entire states, and many years, to construct. It might take decades for a rogue state or terrorist group to get a nuclear capability; but developing bioweapons or cyberweapons can be done far more easily. In fact, as technology continues to advance in this way, as society becomes ever-more interconnected and interwoven, the number of people you need to create and wield these weapons of potentially huge destructive power decreases. Which means the number of people or groups who could hope to carry out an attack like this is likely to increase. Asymmetric warfare becomes more and more possible.

When the first nuclear weapon was dropped on Hiroshima, humans knew that they would soon have the power to inflict unimaginable horror. When small groups or disgruntled individuals can cause similar damage, perhaps on a global scale, how will we control it?

We may not always be able to remove the weapons from the hands of people who might use them. But we can remove the motives from their hearts. People who feel that they have a stake in society — who are contented in general — are less likely to entertain omnicidal fantasies. They’re less likely to be drawn into crime, terrorism, death cults, revolutionary violence — all of the groups and pathways that may soon be terribly empowered by technology. In a more equal world, with more stable governments and more prosperity, it will be easier to universally enforce whatever laws and regulations are required to prevent new threats from getting out of hand. By allowing social and economic inequality to continue, and even to increase — leaving so many disenfranchised people behind — we are exposing ourselves to the risk of a disastrous future.

Of course, there is a counterargument to this point of view: that technology actually makes us safer. Yes, the internet has allowed for the spread of terrorist propaganda; the interconnectedness of society means we can be vulnerable to attacks from any number of different places; and maybe someday the smallpox genome will be transmitted via the internet to whoever might want to use it. But the internet, as a revolutionary technology, has also allowed for the spread of other ideas: positive ideas. It’s democratic. People can — if they can stop flinging mud at each other for five minutes — learn and debate. There’s a reason that authoritarian states seek to control that flow of information; they’re afraid of what it might do.

So what is this really an argument for? Clearly being Luddites — smashing up modern technology entirely — is not an option. The dream, obviously, is a compromise: well-regulated, carefully controlled, responsibly deployed technology. And this requires people to get serious about understanding what new and disruptive technologies are and aren’t capable of, and seeing where they can and should be regulated.

Another notable point: what motivation do superpowers have to go to war with each other? In the old days, it was simple: the Roman Empire invaded Dacia because Dacia had some really awesome gold and silver mines, and Rome carted off the natural resources. The same was true of the so-called barbarians who raided the Roman Empire. But increased technology has meant that wealth and economic value are more immaterial than material. If Russia sent an army into Silicon Valley and occupied the offices of Facebook and Google and so on, it wouldn’t obtain the power and the value of those companies. That value is more ethereal, based on ideas like intellectual property and brand loyalty. The cost of waging such a war far outweighs the benefits, which is one of the main reasons the scenario is so absurd.

Our globally interconnected financial system would suffer in a globally destructive war, and ultimately everyone would get poorer. In less technologically developed societies, where wealth is more about the material possessions and physical things you control, the slices of the pie that you’re keeping from everyone else, there’s more motivation to go to war — before you even get into the nuclear deterrent. In this sense, technology and economic development more generally might reduce the motivations for wars.

Technology also increases the ease and speed of international responses. A universal translation machine would allow governments to coordinate even more easily, and such a device might be only a few years away — machine translators of this sort are certainly getting steadily better. [Although you might go with the Douglas Adams idea that, by allowing everyone to understand each other, the Babel Fish was responsible for more and bloodier wars than anything else in the Universe’s history.]

Real-time communications allow several countries to coordinate responses to new epidemics; this kind of thing. And technology can act in ways that reduce inequality, making it easier for everyone to live a similar standard of life, too. It’s certainly not entirely a one-way street.


Consider other types of technological utopia. Futurist James Burke and others talk about a device called the nanofabricator. The idea here is that you have the ultimate 3-D printer; a device that can rearrange substances on the molecular level. Then, the futurists say, all you need is raw materials; a lump of dirt here for the carbon, some air, a few trace metals, a little energy to power the nanobots, and you can have more or less anything you want. After all, what is food? What is a laptop? It’s just a particular arrangement of atoms.

Now, obviously, this is a massive oversimplification and tells you very little about how difficult it actually is to fabricate anything you want in practice. Paintings are just a particular arrangement of paint. This observation doesn’t make me Van Gogh.

One can imagine that, in principle, nothing prevents you from scanning an object, developing a blueprint, and using nanomachines to construct a copy; and if the cost of such a machine becomes affordable, then suddenly you’ve essentially done for material possessions what the internet did for information. They are liberated.

And, in such a world, you can imagine there might be incredible abundance; the value of manufactured goods would drop off a cliff; we could provide people with more or less anything they want. It’s this magical machine that can create everything you want that has led other people — who think the idea is less realistic and feasible — to call it the “Santa Claus machine”. [I hope someday we’ll do some episodes on nanotechnology and we can revisit this idea, to see how feasible it is and how close we are to it at the moment.]

Regardless of whether the Santa Claus machine could ever happen, things analogous to it will begin to happen as robotics, nanotechnology, and artificial intelligence become more advanced: vast amounts of human labour will be replaced by technology, just as happened when the combustion engine was invented.

What does this do? If it creates wealth for everyone, then you can imagine there’s less threat that any particular group or individual is going to want to destroy the world. Or, if society remains unequal — if access to this kind of abundant production is limited, and, depending on what you want to make, the raw resources remain scarce — then the old problems persist. At the same time, these kinds of technology can “democratise” the production of consumer goods, but also of more dangerous things. When the 3D printer first started becoming prominent, one of the first things that terrified people was the blueprint for a 3D-printed gun. Nowadays, there are blueprints floating around on the internet for 3D-printed AR-15s. Regulation — and, maybe more pertinently, any way of enforcing regulations on this technology — has already fallen far behind. And though there are no records yet of any murders occurring with a 3D-printed weapon, that doesn’t rule it out.

All this stuff about some hypothetical Santa Claus machine in the far future might seem pretty abstract; making predictions about what the world’s going to look like in a hundred years seems almost impossible. But we can try to figure out what might happen along the way to whatever that future looks like by looking at what’s happening now. How are the most recent, most foreseeable trends in technology interacting with inequality and existential risk?

Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, and imagine that AI will enable us to become superhumans; enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.

Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is: how do you get there? The economist, John Maynard Keynes, notably argued in favour of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was: “In the long run, we are all dead.” After all, if it takes fifty years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.

Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, automation of labour might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls — especially in exacerbating inequality within societies where AI takes on a larger role. There was a recent report from the Institute for Public Policy Research which expressed deep concerns about the future of work. This is the latest in a long string of such reports since the Oxford one, which famously said that something like half of all jobs in industrialised societies like the US could be automated in the next 20 years. [Incidentally, while that report is a really interesting read, we should obviously point out that just because a job has the “technical potential” to be automated doesn’t mean that it necessarily will be automated. If you were looking at the potential of the internet from the 1990s, you might think “with online learning courses, a lot of teaching staff are going to be made obsolete” — but obviously, that didn’t happen, even though it might have seemed technically possible.]

While this newer report doesn’t foresee the same doom and gloom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and the economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290bn worth of wages in today’s economy could potentially be automated with today’s technology. But these are disproportionately jobs held by people who are already suffering from inequality in society.

Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often provided is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen in our societies, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.

The optimists say that AI algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to the human. This boosts productivity, and provides invaluable tools for workers.

But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours, and their wages too. If, in this dream scenario, advances in automation allow one person to do the work of three — by ensuring that the dull and routine tasks are no longer necessary — then chances are a couple of people are going to be out of a job. Meanwhile, our employee — who is now line-managing a series of algorithms — is probably not earning three times the wage they did before. So the benefits of this increased productivity end up somewhere else.

Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication: traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.

In fact, some people argue that the mere perception of automation and AI allows some leeway for wages to stagnate. Essentially, if your employer can argue that you’re replaceable by a robot, you’re likely to be in a weaker position to negotiate your wages with them — even if the technology isn’t actually ready yet.

Ultimately — and again — I think it’s important to stress that this isn’t a prescription for Luddism. The classic counterargument is that, in the past, automation has never led to masses of people being long-term unemployed, because new industries crop up to replace the old. This is certainly true — to an extent. And it would be ridiculous to say that, because a computer means there’s no need for accountants to laboriously add up rows of figures, we should smash our PCs to keep them in work. But it’s crucial to be mindful of what can happen if we’re not careful.

Another, more pernicious aspect of the problem is the way that algorithms, when deployed irresponsibly, can entrench inequalities that already exist. The adoption of algorithms into employment has already had negative impacts on equality; we talked about this in the malicious-use-of-AI episode. Cathy O’Neil, who has a mathematics PhD from Harvard, raises these concerns in her excellent Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of their society, whether racial or based on gender and sexuality.

Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to reoffend than white offenders, and those offenders receive correspondingly longer sentences. It needn’t necessarily be that bias has been actively programmed in; perhaps the algorithms just learn from the historical data — but this means they will perpetuate historical inequalities.

Take the candidate-screening software HireVue, used by many major corporations to assess new employees. It analyses ‘verbal and non-verbal cues’ of candidates, comparing them to employees who historically did well. Systems like this, according to Cathy O’Neil, are “using people’s fear and trust of mathematics to prevent them from asking questions”. With no transparency or understanding of how the algorithm generates its results, and no consensus over who’s responsible for those results, discrimination can occur automatically, on a massive scale.

Note that HireVue can only compare candidates to the historical data. This is what most of these algorithms do, to a certain extent: they examine vast quantities of previously obtained historical data to make predictions about the future. Some of them are trying to predict human behaviour — as the ex-Facebook engineer Jeff Hammerbacher famously observed, “the best minds of my generation are thinking about how to make people click ads.”

In some cases, as with climate models and certain kinds of economic model, the predictions are bolstered by laws — the laws of physics, for example, tell us that if you heat up the world by 1000 degrees Celsius the oceans will boil, even if we’ve never seen it happen. But even then, they depend strongly on what’s happened before; on understanding and predicting historical patterns. And this exacerbates the inequality-existential risks link for two reasons. They cannot predict black-swan events; an algorithm in charge of security will find it difficult to hypothesise about something that could happen if it’s never happened before.

In fact, it was this precise failure that led to the algorithms failing in the financial crisis of 2007–2008. You probably know by now that this was caused in part by sub-prime mortgages — mortgages sold to people who couldn’t afford to pay them back — which were then bundled up into assets and sold zillions of times over on the financial markets. It was assumed these assets were safe because, historically, people didn’t default on their mortgages much, and they certainly didn’t do it en masse. Any algorithm applied to evaluate the worth of a particular bundle of mortgages just wasn’t prepared for an unprecedented number of people to fail to pay; it couldn’t predict the black-swan event, even though a human who looked at the situation on the ground might be able to (and if you’ve seen The Big Short, you’ll know some of them did).
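Here’s a sketch of that failure mode, with invented numbers; the point is just that a model calibrated to calm history cannot see beyond it:

```python
# A toy of the black-swan failure: a model that prices a bundle of mortgages
# purely from historical default rates. All numbers here are illustrative.

historical_default_rates = [0.010, 0.012, 0.009, 0.011, 0.013]  # the calm years

def expected_loss(principal, default_rate, recovery=0.4):
    # lose (1 - recovery) of the principal on every defaulted mortgage
    return principal * default_rate * (1 - recovery)

worst_seen = max(historical_default_rates)
print(f"model's worst case on $1bn: ${expected_loss(1e9, worst_seen)/1e6:.0f}m")  # $8m

# Then the unprecedented happens: defaults far outside anything in the data.
print(f"actual loss on $1bn:        ${expected_loss(1e9, 0.15)/1e6:.0f}m")        # $90m
```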

Similarly, they encode historical biases. If the arrest rate is higher for black people in your district due to historical racism, the algorithms are going to encode that bias. If you put an algorithm in charge of deploying police resources, and you tell it to deploy the most resources where the most arrests take place, then don’t be surprised if the most arrests continue to take place where the most police are present. The more these algorithms are put in charge of the world, the more likely they are to exacerbate and perpetuate inequality: and if we accept what they give us in an unquestioning way, we’ll be in trouble.
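Here’s a minimal sketch of that feedback loop — a deliberately crude model in which police are allocated in proportion to last year’s arrests, and arrests happen in proportion to police presence:

```python
# Toy model of the predictive-policing loop: two districts with IDENTICAL
# true crime, but a biased arrest history. Police go where the arrests were;
# arrests happen where the police are.

true_crime = {"A": 100, "B": 100}   # same underlying crime in both districts
arrests = {"A": 60, "B": 40}        # historically biased record
TOTAL_POLICE = 100

for year in range(1, 6):
    total_arrests = sum(arrests.values())
    police = {d: TOTAL_POLICE * a / total_arrests for d, a in arrests.items()}
    arrests = {d: true_crime[d] * police[d] / TOTAL_POLICE for d in police}
    print(f"year {year}: district A gets {police['A']:.0f} officers "
          f"and {arrests['A']:.0f} arrests")

# Output: 60 officers and 60 arrests, every single year. The initial bias
# never washes out -- the data keeps "confirming" itself.
```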

And what is the big technological hype of the moment? What’s at the top of that Gartner hype cycle, the buzzword that’s been plaguing everyone for the last few years? “Big Data.” O’Neil puts it best. First she describes her work as a “quant”, building algorithms that try to predict and make beneficial trades in the financial markets — algorithms that she blames for exacerbating the 2007–8 financial crisis. Instead of retreating after that failure, though, the algorithms advanced:

“[The algorithms] churned 24/7 through petabytes of information, much of it scraped from social media and e-commerce websites. Increasingly they focused not just on the movements of the financial markets, but on us. Mathematicians and statisticians were predicting our potential as students, workers, lovers, criminals. This was the Big Data economy, and it promised spectacular gains…. Yet I saw trouble. The math-powered applications powering the data economy were based on choices by fallible human beings. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and oppressed in our society, while making the rich richer.”

She describes how a society with more of these algorithms can trap people in a terrible feedback loop. There’s the case of the Washington, D.C. school district, which used an algorithm to fire the worst-performing 5% of its teachers — according to a statistically insignificant metric based on a tiny sample of students’ performance on a single test. In this particular case, one teacher had strong reason to suspect that the scores feeding into the metric had been inflated by others — because these standardized test scores now controlled the future of the teachers, there was every incentive to cheat.

There’s the case of the employers who judge potential employees by their credit rating. The reasonable idea here is that, if you pay your bills, you’re probably the reliable and responsible adult who will turn up to work on time and perform diligently. But if you don’t fit that paradigm — if you got unlucky, which in the US especially often means you had to pay some extortionate medical bills — then you’re trapped in a feedback loop. Bad credit rating; less chance of good employment; less chance of improving that credit rating.

One of the main points O’Neil makes is that these feedback loops can be even further exacerbated when the algorithm gets to evaluate its own success — after all, it will report “I have successfully filtered out twenty-eight bad employees, I have optimised all of my metrics”, and so on. In other words, the terrifying dream of the paperclip-maximising AI — the analogy that an artificial intelligence could unstoppably seek to maximise a goal that’s completely misaligned with our goals — is present in today’s algorithms already.

In just the same way as many would argue the point of a good education isn’t to maximise test scores: when a measure of success becomes the definition of success, it’s no longer a good measure. Algorithms that reduce people to strings of numbers in a spreadsheet, as well as being philosophically distasteful, can also perform terribly, and unaccountably, and then tell you that they’ve done a good job. At the same time, when the numbers influence so much of our lives — our credit scores, the rankings for our universities and colleges, the data-driven approach to hiring and firing — they motivate people to behave like that paperclip maximiser, dumbly seeking to maximise a single number rather than doing something productive and wise. After all, if you’re focused on maximising the ranking of your college, or the search-engine optimisation of your website or article headline, you’re not necessarily doing the best job you can. You’re just maximising a metric. You might even game the system to do it. And whoever decides the rules that go into that algorithm has enormous power. The things that go into the model, and the proxies that are used for achievement, become more important than the other facets of achievement.

And this kind of automation disproportionately affects poor or disadvantaged people, because they are the ones most likely to be evaluated by this kind of system — and the ones least likely to be able to game the system, or to benefit from simple metrics that measure success. Person A is born rich, gets all the best education, takes IQ tests every day as a hobby, and scores 145. Person B is born to parents who can’t read or write, has never even seen an IQ test, but is a keen autodidact and incredibly hard worker who teaches themselves everything they know, and scores 144. It is obvious, at least to me, which person you’d rather have on your team — and you’d never know it from the IQ score alone. But person B might never find out why they got rejected. Perhaps the worst aspect is that the black-box nature of these algorithms means you’ll probably never find out what you did wrong, or how to fix it; you’re reduced to a number in a spreadsheet column, and very few people know how it was calculated. In all likelihood, the explanation doesn’t exist, because no human has ever untangled it.

Technology, Inequality, and Catastrophic Risks: Solutions?

So far, we’ve talked about the inevitable and growing use of algorithms to sift through data and make recommendations. This is a trend in governments and in corporations, and part of it is just a function of the scale of society, the scale of the decision-making that needs to be done, and the scale of the data generated. When Mark Zuckerberg recently testified before Congress, the solution to virtually every problem they identified with Facebook was “better AI, more algorithms for filtering content” — that kind of thing — even when a lot of what people were mad about was Facebook’s algorithms sorting people into filter bubbles and echo chambers and allowing for targeted propaganda. But at a certain scale of data, algorithms are the easier response, one that at least gives the illusion that decisions are being made to the best of anyone’s ability. Here, though, the discriminatory effects of algorithmic decision-making have been, mostly, side-effects.

What becomes even more scary is when that discrimination is the purpose of the algorithm. There have been rumblings for years that China is thinking of implementing a “social credit score” system. They’re already rolling out the scheme in cities like Rongcheng. For doing things that help the community, like charity work, you gain points. For doing things that are a drain on the community, like littering or getting parking tickets, you lose them. Drunk driving plunges you straight down to the bottom of the hierarchy.

If you have a high social credit score, you get preferential treatment. Better, cheaper access to government facilities. Better terms on bank loans. More chance of being employed by the government. In an early pilot scheme of this idea in 2010, people with high scores were fast-tracked for promotions; people with low scores were the first to be fired.

It’s worth pointing out that this idea is a little more complicated than has often been reported — there’s no single, universal credit score that’s already deployed across China. Instead, it’s very much still in the development and testing phase. There are several different pilot schemes in different regions of China with different scoring systems, but they all amount to the same idea: do well and you will be rewarded — be a bad citizen and you’re punished.
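To make the mechanics concrete, here’s a deliberately crude sketch of the kind of points system being described. Every rule and point value below is my own invention — which is rather the point: whoever writes this table of rules writes your future.

```python
# A toy of the scoring mechanics described above. All point values are
# invented for illustration -- the opacity of the real rules is the problem.

RULES = {
    "charity_work": +5,
    "littering": -3,
    "parking_ticket": -2,
    "drunk_driving": -9_999,   # "plunges you straight down to the bottom"
}

def social_credit(history, base=1000):
    return max(0, base + sum(RULES.get(event, 0) for event in history))

citizen = ["charity_work", "parking_ticket", "littering", "charity_work"]
score = social_credit(citizen)
print(score)                                             # 1005
print("fast-tracked" if score >= 1000 else "restricted")
```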

On the surface, of course, you might like the idea that there’s actually some mechanism to enforce karma. As a lifelong pedestrian, the idea that people who skip red lights will lose points in some vast bureaucracy of citizenship… that’s somewhat pleasing. Maybe it would encourage better behaviour; and you can find plenty of people living under these pilot schemes who report positively that people are behaving more courteously. (Leaving aside the whole philosophical argument about whether it makes a difference if people are being kind because they’re empathetic towards you or because they fear punishment from some divine authority.) One woman, who wished to remain anonymous, was quoted as saying: “I trust the government. Who else can you trust if not them?”

But, of course, the potential dystopian aspects of having things like your career, your access to healthcare and other government benefits — your whole future in the system — determined by a single score are obvious. And who calculates the score? What goes into calculating it? Remember all the furore lately about the personal data that Facebook has access to, the profiles they’re building of you, and the unaccountability of that system? At least that’s only trying to manipulate you into buying stuff, or, at worst, voting a particular way in an election. You’re going to enjoy it so much more when those profiles determine how much society values you as an individual; what kind of schools you can get into and jobs you can have — even whether or not you can buy a plane ticket to leave the country.

The algorithms involved in one case were criticised as being “wildly arbitrary and unfair”, and you can guarantee that pestering the authorities with questions about why your score is so low is only going to make it plunge further. Disadvantaged people will remain disadvantaged under such a system. If you’re having to work multiple jobs, or you can’t pay your bills or taxes on time, your score goes down and stops you from getting employed. It entrenches inequality.

Some systems only deduct points for actually breaking the law; you might argue that’s not too dissimilar to current criminal records. But you can imagine all kinds of dystopian ways this system could be actively abused, or could be opaque, unaccountable, Kafkaesque. And in a state like China especially, it’s about maintaining social order and the status quo: having a vast, oppressive, automated bureaucracy watch and judge your every move is likely to produce some pretty disaffected people, trapped in feedback loops by their low social credit scores. Exactly the kinds of people, at the sharp end of a technologically entrenched inequality, who could unleash some of the catastrophic threats we’ve talked about.

You might be thinking that a social credit score is unlikely to be implemented in your country, and you may be right. But in reality, it’s just a particularly egregious example of things that already happen in a widespread fashion. All of the information that we generate — by using the internet, making purchases, or simply being alive — is being collected and used to profile us, to sort us into categories. Machine learning is the only tool with the capacity to analyse this vast torrent of data, and so it is machine-learning algorithms that determine how private agencies — such as credit-rating agencies — view you. Some of the schemes in China are implemented by private companies, and aren’t officially sanctioned by the government. If such a scheme became influential, then — without needing any kind of state mandate — these kinds of algorithmic decisions could still affect your life. Imagine a private company that runs a sort of “background check” by building profiles of individuals in this way. Don’t you think that information might be of interest to future employers? Or, heck, even future partners? One feature already present in the private use of social credit scores in China is the (mercifully, opt-in) option to include your score on your online dating profile.

Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that — due to the accumulation of wealth in older generations — millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favours people who are already wealthy.

Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolise it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider, and strengthen existing unfair power structures, is it something worth striving for?

And this is a problem with technological development as it already is: when it occurs unevenly, and ever more rapidly, the gaps widen between people who have access to that technology and those who don’t. The gap between someone with internet access and someone without is a vast store of immediately accessible knowledge and information; a huge change in what you can do and achieve, as well as in the number of cat videos you can watch. But the technological changes that many think are still to come could make things even more uneven.

People talk about longevity; now that we can read the human genome, and edit it with CRISPR, it may eventually be possible to control our futures through our genetics. Now, I should say — talking to people like Britt Wray on this show has helped me to appreciate that a lot of the traits that people imagine altering, like intelligence, are very complicated and certainly not controlled by a single gene. We’re in a state at the moment where we can almost read and write the language without understanding what any of the words mean. CRISPR, if it’s used on humans, will likely start by eliminating those medical conditions that can be traced back to a single gene, or a couple of genes. But in the long run, it’s hard to see how it doesn’t get used to enhance humans; to make us live longer. Other, more exotic technologies of this kind involve concepts like brain-computer interfaces; we merge with machines, allowing us to become more intelligent — and, perhaps, attain a sort of digital immortality, minds uploaded to the cloud or merged with AI.


I’m not predicting that these technologies can or will be developed, and I’m certainly not predicting that they’ll arise any time soon. But what happens if these kinds of technologies — the kind that involve a step change in human experience as big as the one from mortality to, in some sense, immortality; the kind that involve accelerating our own evolution — arise in an *unequal society*? They would act to entrench that inequality; to make it all the more permanent. We should be clear: equality in society is about everyone having equal opportunities, rather than mindlessly doling out the same wealth to every person. But if the human race almost starts to split into two species, where one set has access to medical technology that allows them to live twice as long, and to artificial-intelligence and biological enhancements that put them physically and mentally streets ahead: how is it possible to talk about equality in such a society? Poverty is already exacerbated by effects like these: people who can’t afford to eat properly will suffer, physically and mentally, as a result. But these kinds of technologies could make it exponentially worse, allowing a privileged few to shoot off into tech utopia and leave the rest of us behind: a division that the Princeton geneticist Lee Silver describes as the “GenRich” and the “Naturals”. What hope for any levelling then? As that famous quote has it: you live in a utopia already; it just isn’t yours.

We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned — the things it seeks to maximise are not the things we want maximised. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. And we could do an entire other episode… or series of episodes… about why GDP is not necessarily the best thing to measure, and certainly not the only thing you should care about!

Growing wealth with increased inequality is not progress. Growing technological progress, where the benefits and rewards of that technology are concentrated in the hands of a privileged few, is not the kind of thing we want to aspire to.

Some people will take the position that there are always winners and losers in society, and that any attempt to redress its inequalities will stifle economic growth and leave everyone worse off. Others will see all this as an argument for a new economic model, based around a Universal Basic Income. Any move towards that will need to take care that it’s affordable, sustainable, and doesn’t lead towards an entrenched two-tier society. In the episodes we’ve done about the future of work, I talked about my scepticism about UBI as the solution. It’s fascinating how this idea can be beloved of both the far left and people on the right, which suggests that it’s defined vaguely enough for people to project their own dreams onto it: the left see it as a kind of crypto-communist utopia, and the right see it as an excuse to simplify the benefits system, and the whole huge apparatus of government, into a single, streamlined payment. In fact, it was Richard Nixon’s advisers who first came up with the idea of a Universal Basic Income to replace all of the government benefits that existed at that time. The Nixon people actually got pretty close to enacting it; there was a lot of momentum behind the idea, and they even ran a trial in Denver, Colorado. They found, in that trial, that despite projections that everyone would take the money and work fewer hours or quit their jobs entirely, there was only a 9% reduction in working hours; hardly the death of all productivity. But the project was ultimately scuppered.

And while the idea certainly has some advantages, I am suspicious — of the idea that a society will voluntarily implement it, and of the fact that it’s regressive, not progressive. If you give the same amount of money to Bill Gates as you do to a single working mother of 3 with a chronic health condition, that’s not exactly doing anything to address inequality. The idea of a UBI in most formulations is evoked alongside the idea of a robot jobs apocalypse, where all the jobs are automated, and half the people end up unemployable. Then, essentially, to stop society from collapsing, UBI comes in on a white horse and saves us all from revolution and despair.

But if this is your scenario, it’s difficult to see how UBI doesn’t just sustain this two-tier society — the people with the advantages, who can get the skills they need to make money in the new economy, and a class of unemployable people who are kept alive on the techno-dole. The point in this society is not that the second group are lazy, but simply that, in a technologically accelerated world, there’s no way they can catch up.

And what can you do if only a small number of certain types of job still exist, jobs which require years of experience? What if, say, the only jobs available required a PhD in mathematics? Is it really a level playing field, even if you have a UBI? The gap between those who depend on the UBI to live and those who don’t could grow precipitously; and the policy itself does nothing to address inequality. If anything, people might take the attitude that once the UBI is in place, nobody has the right to demand anything else.

This is not to say that there are no circumstances under which a UBI can be helpful. On the contrary — for the poorest, it’s a lifeline. “It’s Basic Income” is a collection of essays on the topic, in which I encountered a case study from India, written up by Sarath Davala.

There have been plenty of different trial runs of UBI, as you might expect. But this one stood out for me. It happened back in 2012. The participants were given 200 Rupees a month — that’s around £2.40 or $3 — for the first 12 months, then 300 Rupees a month for the next 5 months. In all, around 6,000 people were given the money, at a cost of around £45 per person.
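As a quick sanity check, the per-person figure follows from the numbers above. This is rough arithmetic only: the exchange rate is the one implied by the text (“200 Rupees… around £2.40”), not an official conversion.

```python
# Back-of-envelope check on the trial figures quoted above.
# Assumes the exchange rate implied by the text: 200 rupees ~ £2.40.
GBP_PER_RUPEE = 2.40 / 200  # ~ £0.012 per rupee (2012-era, illustrative)

per_person_rupees = 200 * 12 + 300 * 5  # 12 months at Rs 200, then 5 at Rs 300
per_person_gbp = per_person_rupees * GBP_PER_RUPEE

participants = 6_000
print(f"Per person: Rs {per_person_rupees} ~ £{per_person_gbp:.2f}")
# Per person: Rs 3900 ~ £46.80 (close to the "around £45" quoted)
print(f"Whole trial: ~ £{participants * per_person_gbp:,.0f}")
# Whole trial: ~ £280,800
```

A few hundred thousand pounds, for 6,000 people, over 17 months: by the standards of development spending, remarkably cheap.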

So how can this help?

The simple answer is that many people in extreme poverty are caught in a debt trap. This is actually an extremely familiar situation from earlier in the history of many Western nations. In Victorian Britain, for example, lots of people who were in the workhouses were in a similar condition. Essentially, in many of these villages, there’s one person who owns a great deal more property than everyone else, and is usually a landlord for most people. If you fall on hard times — maybe the harvest fails, maybe someone falls sick and needs medical treatment or can’t work — then you go to them for a loan. Often, even to plant the crops that you need for the next year, you’ll go to the rich person for a loan of seeds and fertiliser. The result is that you’re dependent on them — they can charge extortionate prices, or offer loans with exorbitant rates of interest. In extreme cases, people will take out a loan with these individuals and be forced to work in a “brick kiln”, simply to pay off the interest on the loan — a form of debt slavery. If you’re working 2–3 days a week to pay off a loan over years and years, and the other 3–4 days a week simply to get enough food to keep yourself and your family alive, you don’t have any hope of self-improvement.
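To see why this is a trap, run some rough numbers. Everything here is hypothetical, chosen only to show the mechanism; none of these figures come from the study.

```python
# Hypothetical debt trap: interest alone swallows the entire repayment.
loan = 5_000             # rupees borrowed after, say, a failed harvest
monthly_rate = 0.05      # 5% per month: exorbitant, but the right flavour
monthly_repayment = 250  # all the borrower can spare each month

for month in range(12):
    interest = loan * monthly_rate   # 5% of 5,000 = 250
    loan += interest - monthly_repayment
    # the repayment exactly cancels the interest: the principal never shrinks

print(f"Still owed after a year: Rs {loan:,.0f}")  # Rs 5,000
```

Pay a rupee less than the interest each month and the debt actually grows. The borrower is running to stand still, indefinitely.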

The UBI breaks the dependence on the moneylender. And this means that it actually has a permanent effect, even after the trial has ended. Before the trial, in one particular village, practically every family went to the landlord for food; when researchers returned four years after the trial had concluded, this was down to just 5–6 families out of a hundred. Many had moved into more lucrative work, less dependent on debt, in better conditions. It’s a permanent improvement in people’s lives. So it’s not a case of “give someone a fish, and feed them for a day” — it’s more like: give people money, and they’ll have the free time to learn to fish. Generally, and especially in the case of the very poor, people are the best judges of exactly what they need to spend the money on — rather than whatever someone in a centralised bureaucracy might dream up for them. That’s what sets this apart from traditional charitable giving.

By providing just enough to cover the basic requirements for life — enough to help feed these families — this money actually brings options back into people’s lives. And I think this is really the ultimate dream of UBI enthusiasts: both as a form of charitable giving, and as a way of improving living standards even further in wealthy nations and overcoming some of the issues surrounding automation. What you buy with it is the freedom to do as you want.

And this is important in wealthy nations too. There’s a widely cited survey from Gallup; the most recent, from 2018, found that 67% of workers worldwide were “not engaged” with the job they were doing, while a further 18% were “actively disengaged”. Call me a starry-eyed idealist, but, in these techno-utopias we dream up, where we’ve invented robots and artificial-intelligence algorithms capable of automating most human labour, it does seem like we could dare to dream of a world where 85% of people don’t basically dislike what they spend the vast majority of their lives doing.

We know that people need to work — that they need to have purpose. This comes out of the social sciences, which show that sustained unemployment is bad for your health, and especially bad for your mental health. The irony is that there is no shortage of useful things people could be doing if more and more menial and mundane tasks can be automated away. And I imagine that UBI advocates would also agree with Buckminster Fuller’s famous quote:

“We should do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian-Darwinian theory, he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”

Personally, even though I’ve read a bit about it, I don’t think UBI is necessarily a panacea. I don’t think there is any one solution to all of our problems. The main things I’d want to see are plans that explain how it actually adds up in practice; what people end up with; and, of course, the effects on society if it’s deployed beyond a small group of people, and so on.

To be sure, if UBI comes into the right environment, if it’s sustainable, and if it works, I can see some merit to the argument that — you give people the basic means to survive, unconditionally, and anything else they do is up to them: they can try to become super-wealthy and start businesses or learn new skills to improve their quality of life, or they can just spend a lot of time outside. It could be the safety net that allows everyone — or, at least, more people than otherwise — to have similar opportunities. Even for this to work, you need to demonstrate that the numbers add up to provide everyone with a UBI that matches the cost of living, which is notoriously difficult to determine. There are all kinds of dystopian scenarios when the UBI ends up being too small, and people have to supplement it by working — except, now, with a UBI and far fewer jobs around, there’s no pressure for the employer to pay a “living wage”, so unskilled people are stuck with truly lousy conditions. Combine this with the idea that some people are going to have access to the fruits of these incredible new technologies, while others may not — and you’re back to the two-tier society again, with the gap ever widening and nothing to shrink it. You would need any such UBI to be implemented alongside a vast and affordable, if not free, “re-education” system that would at least give people the opportunity to retrain and perform one of the jobs that’s not yet been automated.
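Here’s the kind of sum I mean, at its crudest. Every figure below is an illustrative assumption (a UK-sized adult population and a hypothetical payment of £10,000 a year), not a costed proposal.

```python
# Crude affordability check for a UBI. All inputs are illustrative.
adults = 52_000_000    # roughly the UK's adult population
ubi_per_year = 10_000  # hypothetical payment, near a bare cost of living

gross_cost = adults * ubi_per_year
print(f"Gross cost: £{gross_cost / 1e9:,.0f}bn per year")  # £520bn

# For scale: total UK public spending is on the order of £800-900bn a year,
# so a UBI at this level only "adds up" with big offsetting changes:
# replacing existing benefits, raising new taxes, or clawing it back
# through the tax system from people who don't need it.
```

The point of the exercise isn’t the particular numbers; it’s that any serious UBI proposal has to show its working at this level before the interesting arguments even start.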

But I do have sympathy for proponents of a UBI in this respect. Remember the Scheidel book, The Great Leveller — that eerie and depressing tendency, throughout history, for societies to follow the same trajectory, where inequality creeps eternally upward until some kind of disaster arrives and “levels” people with a great deal of suffering. Because although you might view some level of inequality as desirable in a fair and dynamic society — maybe because it keeps incentives in place for people to work — there must surely come a point where it gets to be too much. Or must there? Let me know what you think. But if there is such a point, then the Great Leveller thesis implies that the only way out is via some horrendous disaster — and we really don’t want that.

So if you’re going to find some way out of that trap, then I think you need to think big. You need to have a vision for a society that works differently. It doesn’t have to be the UBI model; it doesn’t have to be anything like it, necessarily. And perhaps the picture that the techno-utopians paint frays at the edges and falls apart under closer scrutiny. But coming up with that vision for how things might alternatively work is the first step.

===

More from “solutions” / “future” bit of Scheidel

===

So I think so far in this series, I’ve hopefully convinced you — if you didn’t believe already — that inequality is a huge problem, that it’s likely to grow due to the influence of technology and the trajectory that we’re on at the moment, and that it could exacerbate all of these existential risks that we’re concerned about, as power becomes more distributed, as it becomes possible for one person to kill not just dozens but thousands of people… And I hope I’ve demonstrated that it’s a really difficult problem to solve, and that the historical solutions to growing inequality would either be catastrophic in themselves or simply don’t work any more. We need a new path — maybe, depending on who’s right about the future of work, a new economic model entirely — but even then, we have to be wary about silver bullets like Universal Basic Income that may or may not work.

I want to finish off by convincing you that inequality is, in itself, a catastrophic risk for our species. We’ve touched on this already, but it feels really important. First, a definition of a global catastrophic risk is helpful. It needn’t be apocalyptic; you don’t need the ‘world to end’ in the sense that everyone is killed. Nick Bostrom, who kicked off a lot of academic research in this field after noticing that there was more academic literature on the dung fly than on human extinction, describes one type of risk as a ‘shriek’ — a category he would later call a ‘flawed realisation’.

The idea is that civilization is headed towards some kind of technological maturity, some kind of steady state like the one we described at the start of the series. The current phase of limitless exponential growth and rapid technological change seems unsustainable. A better model for populations, and maybe for technological growth, is the logistic curve: an S-shaped curve whose early exponential growth eventually peters out at some finite value. At that point, things reach equilibrium again.
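For reference, the standard logistic curve can be written as follows (the notation is the conventional one, not anything taken from the episode):

$$P(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$

Here K is the ceiling the curve levels off at, r sets how steep the exponential phase is, and t_0 marks the midpoint, where growth is fastest and then starts to slow. For early times it looks just like exponential growth; later, it flattens out at K.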

Perhaps we’ll have all kinds of radical technologies. Maybe superintelligent AI will dominate the landscape, or perhaps we’ll have molecular nanofabricators that can create abundance for everyone. Maybe humanoid robots will be a reality. Maybe biotechnology will have allowed us to enhance ourselves; superintelligent, super-strong, long-lived. It’s difficult to know what technological maturity — or even ‘the post-human era’ — will look like.

But if we are headed towards this inevitable, post-everything world: what if it’s a flawed, dismal realisation? The technology is developed, but in a bitter, wish-corrupting twist, society is ruined. Perhaps it’s used to establish a totalitarian dictatorship where quality of life is grindingly low, and there’s no hope of escape: a sci-fi dystopia. Perhaps humans have been replaced entirely by emotionless machines, who value nothing — and so there is no ‘hope’, or ‘love’, or any of the things that we might say we value in human society. But since we’re at a steady state, where the major forces of change have slowed down, it could be that we’re in a dismal world that’s almost permanently entrenched. The 21st century is terrifying because it holds such incredible risks, but with those risks, there’s also the promise that things might be better; that we could be wiser, and get this thing right. On the flip side of the coin, there are apocalyptic threats, and dismal realisations.

This is one of the categories in which inequality is a global catastrophic risk. We have already seen that rich people can live longer and afford more technological enhancements to their quality of life. But old age, infirmity, and death have always been the great levellers; inevitable for everyone. If, instead, our technology runs away and advances faster than our society — accelerating these great gulfs of inequality — we would be living in a flawed realisation.

Techno-optimists like Ray Kurzweil predict that these technologies will become the great levellers, and that eventually everyone will ‘transcend biology’ and become ‘posthuman.’ But similarly optimistic predictions about the technologies we have today haven’t always come true. William Gibson put it best when he said: “the future is already here: it’s just not evenly distributed.” Around 4.2 billion people don’t have access to the internet — decades after it was invented, and years after it became nearly ubiquitous in rich countries. We have the food to feed ten billion, and still can’t end world hunger. People have predicted in the past that machines would allow us to have everything in abundance, and so there would be no need for inequality: people would just share what they didn’t need. That promise is at least as old as Keynes, writing in 1930, and it’s been repeated often ever since. Looking up predictions for 2020 as 2019 draws to a close, I found that Time magazine, back in 1966, thought that by now every American would be living on the equivalent of around $300,000 a year in today’s money, with this relative wealth provided entirely by automated labour and the functioning of machines. It didn’t happen: maybe it was possible, but it didn’t happen. Can we be certain that this time it will be different?

For many, this would be a dismal realisation. Accelerating away into the post-human world, we could divide into two groups whose lives are so substantially different that they may as well belong to different species. In some ways, you can argue that this has already happened, just within the last few hundred years. For the chosen few, this is utopia; for the rest of us, it can actually perpetuate a terrible system.

I’ve spent an awful lot of free time reading and writing about these catastrophic risks. But it’s not for doomy-gloomy purposes, even though I think we all secretly enjoy the vast scale, scope, and drama of the things we’re speculating about and describing. Let’s be very clear: the Great Leveller is not a totally deterministic guide to the human race throughout all time. I don’t think history is deterministic, and deterministic theories of history do tend to get embarrassed by the facts. That was as true of Marxism, which held that all societies inevitably evolve towards communist utopia by means of revolution, as it was of Francis Fukuyama’s belief that liberal democracy was the end state towards which everything would evolve. Without determinism, things may very well be different this time. Society has changed unimaginably, radically, over the last century; even over the last twenty years; and so has technology. The same will happen over the next century, even over the next few decades, if we survive them. Black swans can happen. Unforeseen things can happen. Past performance is not a flawless predictor of future results. There is no historical determinism here.

But we need to escape the trends and forces that seem to dominate at the moment. In all of these risks, all of these fields (unfriendly AI, climate change, global thermonuclear war, bioweapons or nanotech, and global inequality), we have the power to address them. These are problems that, in some sense, we have created, and that, by our actions, we continue to create every day. “We built them; we can take them apart.” But we have to be wiser; we have to be smarter; we have to find a better way to do things. We have to listen to what each other has to say for longer than the length of a tweet.

In history, as in life, things aren’t like the movies. If this were a Hollywood film, the moment the main character realised what the problem was and named it, it would be pretty much instantly fixed. In reality, it’s very rare that identifying our problems lets us fix them so easily. Instead, figuring out what the problem is, is just the start of an incredibly long, winding, and difficult road towards solving it. This is an immensely complex problem, and complex problems almost never have simple solutions. We have to find a way to be one of the only civilizations in all of human history to reverse this trend of increasing inequality without a violent catastrophe: a task of immense historic magnitude. But the more we talk about it, the more we think about it, and the more we prioritise it, the more likely we are to find a way.

Thank you for listening to this series from Physical Attraction.
I really hope that, even as I’ve stumbled around exploring this vast topic, you’ve all been thinking about it and have come up with far better ideas and critiques than I have! Let’s make this a discussion. You can get in touch via www.physicspodcast.com, or follow us on Twitter @physicspod. If you want to do your bit to address the inequality between podcast host and listener, and if you’ve enjoyed the show, please tell your friends about it and consider donating via the PayPal link on the website.

Until next time; be kind to each other.

www.physicalattraction.libsyn.com