TEOTWAWKI Episode 3 — The Singularity (2017 Podcast Script)

I feel like a lot of physicists and science enthusiasts can trace the origins of their passion to reading a hell of a lot of science fiction when they were growing up. This is true for me; I loved reading as a teenager (still do!) and sci-fi was my favourite genre by a country mile. There is a little room in a second-hand bookshop in Hay-on-Wye which was TARDIS-like; so much bigger in terms of book content than it seemed from the outside, and all of it selling for pennies. It was there that I was introduced to the dystopian futures of 1984, Brave New World, and Fahrenheit 451 — which have influenced my thinking ever since, and probably explain why I’m sat here writing about the end of the world rather than rainbows and kittens. And, like I’m sure some of you did, I read a lot of Isaac Asimov’s short stories about the Three Laws of Robotics. Remember them all?

1) No robot shall ever harm a human, or allow a human to come to harm.
2) Robots will obey human orders, except where it contradicts law 1.
3) Robots will defend themselves, except where this would contradict law 1 or law 2.

And most of those short stories were essentially “look-at-me-I’m-so-clever” logical mind-puzzles rather than compelling narratives with well-developed characters; Asimov would put his robots into various complicated scenarios and demonstrate how brilliantly clever his laws were, or where there might be loopholes that people would need to consider. Of course, you might prefer the satirical version that sci-fi aficionado David Langford proposed:

  1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
  3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

These books were smart — but the one that really inspired me was Blood Music, by Greg Bear. I urge you to read this novel; it’s just amazing (and rather short for something on such an epic scale). In the book, a maverick biotechnologist creates little biological nanobots that are incredibly advanced — and refuses to destroy them when ordered to, preferring to inject them into his own body. They quickly evolve, become self-aware, and learn — crucially — to alter their own genetic material. Driving their own evolution, the creatures — called ‘noocytes’ — evolve supremely quickly, and begin to improve their creator as well, curing him of physical defects and making him a superior being — both physically and intellectually. And then, it seems, the cells go rogue. They begin infecting other humans and reducing them to a kind of grey gloop. All of the surviving characters are witnessing apocalyptic scenes — the world horrendously transformed by this unstoppable infection. You can’t outrun it; you can’t begin to cure it — not when every individual cell of the virus that’s infecting you is as intelligent as a human, and the cells are capable of pooling their intelligence into a super-computer brain. It outwits the humans; it infects the water supplies; it learns to transmit itself from person to person — and it seems like nothing can stop it. Unlike the random destruction of an asteroid or gamma ray burst — this is an intelligent apocalypse, tailor-made to destroy humanity. As the last few humans are reduced to grey gloop along with everyone else, you think you’re witnessing a rather grim portent of the dangers of promethean science.

And then, in the last few pages, the book hits you with the most incredible twist. The noocytes, the bio-viruses — they weren’t destroying humanity, but instead, preserving it. Enhancing it. Uploading it to a network. Human intelligences are stored, and restored; and they ascend to a higher plane — a virtual plane. All humans are freed of need, want, and death forever: all humans can interact with each other, and they live as omnipotent gods in this virtual Universe, capable of nearly anything they could want to do — experiencing whatever they want to experience — and cured of all the sicknesses, both medical and psychological, that can make being human such a frustrating experience. They haven’t been killed, but saved.

That’s the vision; that’s the twist; and this is the reality of the apocalypse that I’m calling the “technological singularity” — it doesn’t necessarily have to mean the end of the world in a bad way. But it’s a TEOTWAWKI all the same — partly because no-one knows what will happen (could AI destroy humanity?) and partly because, even if the rise of sentient machines creates nothing but benefits for humanity in the long run… it’s still the end of the world as we know it. Because once this happens, if it happens, nothing will ever be the same again.

So what is the singularity? Well, in physics, a singularity can be described as a point of infinite density. Some people believe that a singularity is what lies at the heart of a black hole — a point of infinite mass density that gives the black hole its bite: the mass that allows it to drag in even photons of light that pass beyond a certain point. In this case, the technological singularity is more like a point of infinite intelligence — or intelligence that practically becomes infinite. Imagine an artificial intelligence that has the capacity to improve itself; it has an understanding of what sophistication means, and it can rewrite its own source code. The AI can access its own design, understand it, and improve it, and in cycles that continue to accelerate it will become more capable, more intelligent, and better at designing itself. It’s the same idea of exponential growth, except that the machines may be able to approach the fundamental physical limits on intelligence very fast indeed. With access to vast computing power, such a system could quickly slip beyond its human controllers, and then — who knows what it will do? In a world that’s increasingly automated, with a greater and greater network in the Internet of Things to tap into, you can imagine that such an artificial intelligence would quickly have real-world power: the ability to infect systems like a virus; the capacity to harness the knowledge of the internet at its fingertips. And, of course, such code can produce as many copies of itself as there’s space for; the collective intelligence of such a system could become, in a sense, almost infinite. Which is another reason why the technological singularity is similar to a physical singularity, if they are indeed at the heart of black holes.
Just as we can’t see beyond the event horizon and understand what’s in the heart of a black hole, we can’t see beyond the technological singularity and what it might mean. The motives of a sentient, super-intelligent AI are hard to define. If you program in moral values, like Asimov’s laws, but the machine has or develops the capacity to rewrite itself — who knows what it might do?
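The runaway feedback loop described above can be sketched in a few lines of code. This is only a toy model with made-up numbers: assume each redesign cycle improves capability by an amount proportional to the current capability (the smarter the system, the better it is at improving itself), up to some hard physical ceiling. The continuous version of this recurrence, dc/dt proportional to c squared, literally diverges in finite time, which is one reason the word ‘singularity’ stuck.

```python
# Toy model of recursive self-improvement. All numbers are illustrative
# assumptions, not predictions: 'k' is how much each unit of capability
# helps with the next redesign, and 'ceiling' stands in for the physical
# limits on intelligence mentioned above.

def takeoff(start=1.0, ceiling=1e9, k=0.01, max_cycles=500):
    """Return capability after each self-redesign cycle, clipped at a hard ceiling."""
    capability = start
    history = [capability]
    while capability < ceiling and len(history) < max_cycles:
        # Growth per cycle scales with current capability: the system's
        # rate of improvement improves as the system does.
        capability = min(capability * (1 + k * capability), ceiling)
        history.append(capability)
    return history

curve = takeoff()
```

Run it and the shape is striking: for many cycles almost nothing seems to happen, and then the curve explodes up to the ceiling in a handful of steps. That slow-start, fast-finish profile is exactly what would make such a take-off so hard to react to.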

This is actually considered, by many futurists and people who study end-of-the-world scenarios, to be one of the more plausible ways that humanity could destroy itself. Perhaps everyone watched too many Terminator movies, where SkyNet is an artificial intelligence hell-bent on destroying the humanity that threatens its existence; but people as diverse as Stephen Hawking and Elon Musk have warned that artificial intelligence could pose a threat to the human race. And when people from the Future of Humanity Institute at Oxford and the Centre for the Study of Existential Risk at Cambridge give their rather grim predictions for civilization making it out of the twenty-first century, AI development factors in as a pretty big contributor to the rough likelihoods they assign to each individual apocalypse. “As soon as the robots don’t need us, we’re gone.” It’s a chilling scenario that’s fed into an awful lot of wonderful sci-fi over the years. But evidently a lot of people are taking it seriously. And it doesn’t take an awful lot of imagination to see how it could happen.

So you have to try to answer two main questions. One of them is: how likely are we to actually develop artificial intelligence? And the other is: how likely is it to genuinely destroy us if we did develop it?

Physical Attraction has dealt with a kind of artificial intelligence before — in our episode Seduced By A Robot, where I looked at chatbots through the ages and some attempts at neural networks to generate “creative-sounding” answers. And what you’ll realize from this is that we’re some distance from truly convincing artificial intelligence yet, even when you’re effectively trying to limit that intelligence to interacting with humans. We have some completely “programmed” chatterbots that simply read to you from a long list of responses typed in by a human being for individual scenarios. But that’s nothing like an actual intelligence. Humans aren’t born with such responses; we “learn” based on a combination of imitation and a reward system. As a baby, you cry, and you’re rewarded by obtaining food or attention. Gradually, things become more sophisticated, and you hopefully learn that good behaviour has its rewards. Eventually, you become sophisticated enough to compose a series of physics-based chat-up lines, and the rewards are… okay, so, maybe it’s not a completely flawless process. Neural networks are starting to approximate this; you can “feed” them with lots of input, plus an idea of the framework you might want their responses to take, and perhaps some parameters for improvement and learning. They can produce responses with a degree of creativity that are appropriate to given situations, not just pre-programmed by a human. But this is a long, long way from anything you might call artificial intelligence. Conversational machines like Siri and Cortana can hook up to search engines and databases to answer questions; but try holding a conversation that lasts for longer than a few lines, and you’ll quickly see that they’re also a long way from SkyNet.

There is a lot of buzz around AI platforms that come with humanoid robots. And these robots are becoming more and more impressive; many of them have quite incredible capacities already. I feel like it’s worth reminding people — when they look at robots struggling with things like bipedal locomotion (walking on two feet)… we as humans are the end product of billions of years of evolution; our brains, by some measures, have processing power comparable to some of the more decent supercomputers out there — and even we take months of trying, failing, and stumbling to learn how to walk. We are really remarkable creatures, in so many ways; what we’re capable of doing and learning in terms of moving around, responding to the environment, and playing sports… for robots to achieve this functionality, they need to make millions of calculations. Our squishy, plastic brains, with their networks and free associations, are still outperforming incredibly complicated computers. And we are still wonderfully malleable and adaptable, even if it doesn’t always feel like it. It may well be that computers have far outstripped us in raw calculating power, but AI still lags a long way behind in terms of creativity, and in applying its capacity to a range of different tasks.

There are lots of challenges, though, that the most advanced AI and computerised systems are a long way from solving. Perhaps the most advanced walking robot out there is Atlas, made by Boston Dynamics — you can see their demo video online, and it’s really very impressive. The machine can walk over rough surfaces and terrain, balance itself when thrown backwards, and climb stairs. But this piece of kit took years to develop and costs millions of dollars. Its intelligence is limited. And you can think of countless tasks involving manual dexterity that humans wouldn’t even think twice about that this state-of-the-art robot couldn’t manage. You’ve probably done a dozen things today that would prove impossible even to this advanced technology. This is not to dismiss the incredible efforts and breakthroughs that have been made in robotics and artificial intelligence. It’s just that we have a head start of billions of years, and the ruthless process of evolution on our side.

So efforts towards robotics and artificial intelligence are focused on mimicking the best model we have — humans. Ray Kurzweil is one of the best-known writers on the singularity, artificial intelligence, and how things are likely to change in the future. He made his name by successfully predicting many of the technological developments of the past few decades — and he predicts that, by 2045, the singularity will have arrived and rendered the future beyond that almost impossible to understand. In his book, The Singularity is Near, he describes the processes that might one day lead us towards the holy grail — a generalized artificial intelligence. Artificial intelligence at the moment is a set of subroutines, really. The first, most basic kind of machine is an automaton; it does what it does, regardless. Then there are machines that you can give commands to, that can carry out a series of different pre-programmed routines. Lots of humanoid robots in the past have been like this; there’s always a “man behind the curtain” who has carefully programmed the behaviour, and orders the machine to execute the specific subroutine. Now we’re starting to move into the realms of machines that can perceive their environments, and respond accordingly. There are robots that can plan routes dynamically, and change them according to moving obstacles. And there are algorithms that can “learn” and “make decisions”. Some of these look at the environment, assess which of the situations they’re expecting this is closest to, and then react accordingly: maybe the robot sees someone dressed as a burglar and reports it to the authorities, or sees a cat and doesn’t gun it down in a hail of robot bullets. There are learning algorithms that can adapt to improve at a task. My favourite example of this is a learning algorithm that was taught to play Breakout — you know, the brick-breaking arcade game, clones of which you probably had on your phone ten years ago. Watch the video, it’s great.
All the machine knows is that it needs to maximise its score; at first, it moves the paddle randomly. After a few hours of training, it’s playing like a human would. But then, a behaviour begins to manifest itself that seems like intelligence: the machine learns that a good strategy to score lots of points is to intentionally “tunnel” through the row of bricks, and then the ricochet causes many bricks to be destroyed. Yet, as impressive as this is, it’s not intelligence. The machine is not thinking “if I do this… then this will happen.” The machine happens upon a strategy that works and continues to employ it, because it has been taught what a favourable outcome is, and what an unfavourable outcome is. But it does not decide what’s favourable and doesn’t really “plan” how to get there, either. Nor could you use this for anything else without reprogramming the whole thing; the kind of generalized intelligence we have, which can be applied to lots of different problems and situations, we’re nowhere near.
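To make the flavour of this concrete, here is a minimal sketch of the same kind of reward-driven learning: tabular Q-learning on a tiny invented task, rather than the deep network over raw pixels that the real game-playing agent used. Everything below (the corridor world, the reward numbers) is an assumption made up for illustration. The agent is told nothing except a score signal, and still “happens upon” the strategy of always walking towards the goal.

```python
import random

# Tabular Q-learning on a five-state corridor: states 0..4, goal at 4.
# Actions: 0 = step left, 1 = step right. Reward: -1 per step, +10 for
# reaching the goal. The agent knows only this score signal.

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn a table of action values by trial, error, and reward."""
    rng = random.Random(seed)
    goal = 4
    q = [[0.0, 0.0] for _ in range(goal + 1)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Mostly exploit the best-known action, occasionally explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 10.0 if s2 == goal else -1.0
            # Standard Q-learning update: nudge the value towards
            # (immediate reward + discounted best future value).
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The greedy policy the agent "happens upon": step right in every state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(4)]
```

The point of the sketch is the same one made above: the agent never “decides” or “plans” anything. The numbers for stepping right simply end up larger in the table, which is exactly the sense in which the brick-game agent “found” its tunnelling strategy.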

By the way; we too have feedback loops that tell us what a favourable outcome is, and what an unfavourable outcome is. They’re called emotions. And, in all honesty, I have no idea why so much sci-fi seems to focus on completely emotionless robots. It seems obvious that we will recognise emotional intelligence as a form of intelligence; it’s all done subconsciously, of course, but the feeling of empathy is not uniquely human. For a start, we certainly notice when it’s not present in humans, or when humans are simply trying to simulate it to fit in. We know that animals feel empathy. We have seen, in experiments, that rats will often choose to free a fellow rat trapped in an unpleasantly small cage rather than take food for themselves. It’s a rudimentary kind of morality, and empathy. Why couldn’t these behaviours be programmed into an artificial intelligence — and why wouldn’t you choose to include them? And then, you can imagine some wonderful things: a robot with all the love and care of a human that, with superior intelligence and emotional intelligence, would be far more perceptive. Sometimes — if you’re really good friends with someone — you can almost understand things about them that they don’t consciously acknowledge or understand. What of a super-intelligent, super-empathetic robot that had no priorities other than making you happy? Could it be possible that these machines will one day know and understand us better than we know ourselves? Could we prefer them to flawed, selfish, human company? And — if you do prefer your flawed, selfish human — what if it were possible to upload their personality to an avatar? One that could resemble them in every possible way? One that wouldn’t die? And if you can do this, where does it leave ethics? Can you copyright your own personality? Is it illegal for someone to clone you without your consent? Do you have ownership over the person you are?
Already, you can begin to see how the kinds of technology that are involved in the singularity just throw everything we can possibly understand about the world into complete disarray.

Again — as in the case of retreating into a world of virtual reality — the visceral reaction springs up and says: this isn’t right, this isn’t real, and I won’t stand for it. But we have changed in our perceptions over the years. We now accept that many people’s lives can be improved via the use of medication to adjust our brain chemistry. We are putting more and more trust and faith into intangible algorithms. Of course these things seem ridiculous and absurd now; a few hundred years ago, it was self-evident to many people that there was a God, that beating your children was a fine way to discipline them and even morally correct, and that slavery was totally justified. Our perceptions can change. Factoring this in is one of the most important things you can learn about yourself. And when the empathetic AIs arrive, and they’re so much *nicer* than humans, are you convinced you won’t be seduced?

Here’s another valid question: what does it mean for something to be real? When I touch someone — okay, let’s be realistic, when I touch an object — what happens? Electrical signals fire in various regions of my brain. And this is how I perceive everything. This is what generates my chain of thought, the sensation and sound of the keys I’m tapping, the room that I can see in my peripheral vision, the sensation of my body in this chair and the feet on the floor — all of it is just electrical signals, firing somewhere in my brain. So — here we go — if the brain is in a jar, and it’s being stimulated in the same way, to me, this is completely indistinguishable from reality. Is reality for humans a completely subjective experience? And if you had the choice between a world where your physical body, the one you happened to be born into, suffered and died — or one in which your virtual body, which could be made completely indistinguishable if you wanted, could do whatever you liked — which one would you choose? Maybe having physical bodies, one day, will be viewed as a sad relic of evolution that we can cure in the same way as replacing a hip or filling a tooth.

So, then, back to trying to work out what it is about our brains that’s so damn powerful. The reality is that we have supercomputers that are already, in some sense, more powerful than the human brain. This, of course, always depends on what you mean by power. Your pocket calculator can do sums you can never dream of doing, at speeds you’ll never even begin to approach. But our brains aren’t like that — when we ask them to do calculations, they’re doing so alongside controlling and regulating an incredibly complex machine (the human body). They’re accessing multiple areas. They’re regulating our internal temperature. They’re subconsciously storing information for later. They’re operating six senses at once (including proprioception — the one that lets you know where all of your limbs are, a very complex problem for roboticists). And they’re maintaining a personality, with memories, feelings, likes, dislikes, and god knows how many song lyrics all swirling around in the subconscious. Who can possibly say how much computational power that takes?

Yet scientists have estimated it; they’ve tried to express the power of the human brain in terms of raw calculation. It’s incredibly difficult, because we still don’t fully understand how neurons work. It seems clear that the key to so much of human intelligence is not just in the neurons that we have, but in the connections (synapses) that they make to each other. Every neuron makes something like a thousand connections to other neurons. In many ways, this power to connect is part of the reason we demonstrate such amazing abilities. Consider the cricketer, standing at the crease; how is he able to smash the ball for six over long-on? All he can see is the trajectory of the ball. As Kurzweil and others point out, when we do this, we’re not solving the problem in the same way as a computer does; we’re not calculating the trajectory of the ball by solving incredibly complicated differential equations that take into account air resistance, and then solving even more equations to calculate precisely how much force we should apply to the bat and in what direction… instead, due to years of experience and connections that have formed in the brain through training and learning, we are translating the visual stimulus — the flight of the ball — into actions in our arms and legs. (In my case, it’s still no good, and I usually miss the damn thing.) But it is clear that this idea of connections is key to the human mind and its capacity to operate. And even treating each one of these synapses like a little computer that can carry out certain operations is a vast oversimplification. Still, you can get order-of-magnitude estimates. Dharmendra Modha, whose team attempts to simulate aspects of the human brain by modelling neurons that can “form or break” connections based on experience, as our brains do when we learn, estimates that the human brain has 38 petaflops — that’s 38 thousand trillion calculations a second — of raw computing power.
Others have estimated that to simulate a human brain would require one exaflop of computing power — that’s a million million million calculations a second. By 2020, this may be within the realm of our most powerful supercomputers. It may be possible, only then, for a supercomputer to approach the raw processing power of your brain. So you should be prouder of it!
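Figures like these come from back-of-the-envelope arithmetic you can do yourself. The inputs below are round assumed numbers (roughly a hundred billion neurons, roughly a thousand synapses each, and a nominal rate of operations per synapse per second), chosen to illustrate the scale rather than as a precise neuroscience claim:

```python
# Back-of-the-envelope estimate of the brain's raw "computing power".
# All three inputs are rough illustrative assumptions.

neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 1e3      # on the order of 1,000 connections each
ops_per_synapse_per_sec = 10   # nominal operations per synapse per second

brain_ops = neurons * synapses_per_neuron * ops_per_synapse_per_sec

petaflop = 1e15  # a thousand trillion operations per second
exaflop = 1e18   # a million million million operations per second

print(brain_ops / petaflop)  # prints 1.0: petaflop scale, same ballpark as Modha's figure
print(exaflop / brain_ops)   # prints 1000.0: the headroom a 1-exaflop machine would have
```

Nudge any of the assumptions by a factor of ten or so and the answer moves between tens of petaflops and an exaflop, which is roughly the spread between the two estimates quoted above.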

One of the key reasons that understanding the brain is so linked to the idea of a technological singularity is something Kurzweil expounds at length. In his view, in order for us to completely understand the human brain, we can only go so far with non-invasive PET and EEG scans. There are already teams that can scan entire mouse brains at resolutions of nanometres — the kind of resolution you need to capture the brain in full — but to do so, you have to slice up and destroy the brain, which obviously won’t work for humans. So what’s the solution? Eventually, nanobots — which can cross the blood-brain barrier that keeps our brain largely safe from harmful substances — will have to be deployed to ‘deep scan’ the brain from within and capture every important detail. And, amazingly, there are already scientists who are at least thinking about the ways that you might get these nanobots through the blood-brain barrier and into our heads. And, once they’re there, why limit them to just looking? Why not have them begin to act to improve things? Provide some extra processing power where it’s needed? Enhance our intelligence, and fix our personality flaws?

This is one of the aspects of the singularity that’s most interesting to me — because, whenever anyone showed me these visions of technological utopia, I always thought: that’s fine, but you’re extrapolating. Exponential growth can’t last forever — natural resources will eventually run out. And, I always thought, in this case, the natural resource would be our intelligence. The intellectual capacity of any one person. In the Middle Ages, there were people who could study for a number of years, and feasibly claim to “know everything” — or, at least, they’d know and understand a great deal of what was written and widely available. Now, that’s impossible. The only reason we can make any progress at all is that people are incredibly highly specialized to individual tasks. You can spend an entire lifetime in one tiny field of physics, or chemistry, or biology, or linguistics, or sociology — any field, really — and still only know a tiny fraction of the things there are to know; still only be capable of making contributions to a small area. Even collaboration can only take you so far. And so, eventually, I thought, we will saturate and we will no longer be able to grow exponentially any more. After all, if you’re dealing with problems that are too difficult for humans to solve or even understand with our dull and limited monkey-brains, problems that take decades of dedicated study just to get to the forefront of research… and problems where our intuition and natural insight are completely shot — then maybe the exponential growth will stop. Maybe, to make progress, we’ll have to reach beyond the natural limits of what humans are capable of. And then — just like the Malthusian catastrophe, and just like the peak oil optimists say — maybe we will solve the problem by expanding our own capacity. The problems might not be solvable by humans. But what about superhumans? 
Maybe we should dedicate all of our efforts to enhancing ourselves, and leave it to the superhumans to resolve these issues. But what if the thing that prevents the singularity from happening is the fact that we’re not quite smart enough to really understand the complexity of our own brains? What if our inability to enhance ourselves means that there are, in fact, fundamental limits to how we can change the world?

Another fascinating issue that you’ve probably thought of at this point is — what does it mean to “upload” someone’s brain, and make a perfect copy of them? Is this, in fact, a way of cheating death? Because it seems more likely — to me, anyway — that you might have created a copy that is identical to you in every way, but your own line of consciousness remains where it always was: the same ‘you’ that you’ve always been still exists, in your physical body, and when that dies, so do ‘you.’ It’s like the idea that one way of teleporting someone is to instantly re-create them in another location — their body is simply scanned in one portal — transformed into pure information — which is re-created at the other location. But is the original person killed? What if you could atomise someone and teleport each individual atom to re-create them millions of miles away? If the consciousness is interrupted for even a nanosecond, how does it feel? Do you die? Are you still the same person? And take another scenario. What if, instead of destructively copying someone, all at once, we piece-by-piece replace them? After all, isn’t this kind of biological regeneration what happens to us all the time? Are we, and our consciousness, like grandfather’s axe: the same thing even after every part has been replaced? Can your consciousness — the continuity of your life and existence — seep into an improved, immortal form? And how much of this is just the same as destroying yourself? It all comes down to how little we understand death. If death is just a loss of consciousness, it’s no different to falling asleep, except you don’t wake up. There’s a symmetry to it; the billions of years before I was born, before I became self-aware: these are the mirror-image to the billions of years after I will die. We imagine, perhaps, if we don’t believe in an afterlife, that there is an endless darkness, an endless blackness, a lack of all sensation. But this is not death. It is meaningless to talk about sensation after death. What would being uploaded be like?
You can get into all kinds of super-angsty teenage questions — some of which are daft and pointless but still make you think — I mean; if your consciousness was interrupted in the same way every time you sleep, would you know about it? All you know is that you awoke today with a detailed set of memories that made you feel like you’re part of a continuum, the same ‘person’ who lived through yesterday and the day before: but the nature of a good illusion is that it’s indistinguishable from reality. The singularity, if it’s technologically possible, could change what we understand by what it means to live, and what it means to die: things as fundamental as this.

So, as you can probably guess, there’s too much material for me to hope to cover in a single episode. For now, we’ll leave the singularity in the distant — or maybe not-too-distant — future, and return to it next episode.

Thanks for listening to the TEOTWAWKI specials of Physical Attraction. Please, spread the good word about the end of the world to your friends, enemies, etc. Find us on Twitter @physicspod, and Facebook — all the usual places. Please write a review for us on iTunes; that helps me get more listeners, and, one day, I dream, my listenership will start to grow exponentially and, inevitably, by the power of e, we will take over the world….


Hello and welcome to this TEOTWAWKI special of Physical Attraction; today, once again, we’re going to be talking about the technological singularity. This is part two of a two-part series, so you’re advised to go back and listen to the previous episode if things don’t make much sense.

A very brief recap: the singularity is a lot of different ideas, but maybe they can all be summarized by the notion that artificial intelligence will one day exceed human intelligence. Exponential growth in technology will continue, and, eventually, runaway technological development will exceed anything we can reasonably understand, and change our society in unfathomable ways. Some people envision a paradise, where we have cured the problems of death, disease and sadness by uploading our brains to a virtual paradise. And some people think that this will be like a brilliantly myth-y combination of Prometheus and Icarus, and we’ll destroy ourselves in the process. But along the way, the ideas that show up — about consciousness, what it means to be human, and what it means to be intelligent — lead you on all kinds of brilliant philosophical tangents.

Another interesting philosophical question that arises when we consider the nature of consciousness is whether or not we have free will. There is a famous neuroscience argument about this, that Kurzweil quotes in his book.

“Interestingly, we are able to predict or anticipate our own decisions. Work by physiology professor Benjamin Libet at the University of California, Davis shows that the neural activity to initiate an action usually occurs about a third of a second before the brain has made the decision to take the action. The implication is that the decision is really an illusion, that consciousness is out of the loop. The cognitive scientist and philosopher Daniel Dennett describes the phenomenon as follows: ‘The action is originally precipitated in some part of the brain, and off fly the signals to the muscles, pausing en route to tell you, the conscious agent, what is going on. But, like all good officials, it lets you — the bumbling president — maintain the illusion that you started it all.’”

You might not like this neurological argument — after all, you might say, surely this is just passing the buck a little bit earlier in time. We might not be aware of the decision we’re making until after we’ve made it, until after the process has started; but that’s just a time-saving device for speedier reactions, to stop you from getting gored by a woolly mammoth, or whatever. It just changes where the consciousness shows up in the flow of making decisions. If we are our brains, then this isn’t really a concern: which neurons fire in response to something, that’s just who you are! In some ways, that’s all that you are. (Which is still an awful lot.) There’s still a part of my brain that reacts to stimuli in a certain way, and how it chooses to react — that’s my free will. Well, yes, and no. Right now, as you are at the moment, your reaction to any given situation might be ‘set’. That is to say, if I showed up and offered you a sandwich, or brandished a knife in your face, your reaction is already predetermined; and it’s determined by a combination of your life experience, and your genetics, neither of which you have any control over. If you’re headstrong, you might steal the sandwich. If you’ve spent years learning martial arts, you might disarm me. If you’ve just been listening to loads of true crime podcasts and you’re terrified of being murdered, you might run away. The fact that things are predetermined isn’t something we can sense. Awareness of how you might react, for example, might change how you do react; so we never see our decisions as being decided for us, beforehand. But we might not fully understand, or have access to, all of the things that go into decision-making, even if we think that we do. Just ’cause you feel it, doesn’t mean it’s there.

The brain has incredible powers to rationalise after the fact. An example of this is a famous study involving people whose left and right brains are no longer connected — a last-ditch treatment for severe epilepsy. Since each half of the brain broadly processes the opposite half of your visual field, by showing images to just one side, you can stimulate each hemisphere separately. They once showed pornography to one side, and the person’s reaction was to blush and giggle. When they were asked why they’d done this, the left brain — associated with verbal abilities — created elaborate explanations for why they’d just done that. ‘Confabulation’, as it’s called, allows us to bridge many gaps, explaining our actions and making up for deficiencies in our own self-understanding. And this confabulation raises all kinds of questions about the way we live our lives. Neuroscientists believe there are perhaps 80,000 spindle cells that deal with high-level emotions; they deeply interconnect with many other parts of the brain, and could be responsible for things like falling in love, or guilt, or the euphoria that comes from listening to your favourite piece of music. Then, it’s left to the rational part of the brain to explain and make sense of where these feelings and chemical rushes come from.

So, when a decision happens, are we just retroactively justifying it to ourselves — making it look like it was born of our free will? Is any of this, runs the argument, really ‘free will’ when it’s all predetermined? Or is it just an illusion? We look at our behaviour and see that we act according to certain principles we have; morality, laziness, a sense of duty — these are the things that can lie behind the choices we make: but are they the most important aspect in making the decision, or a story we tell ourselves? And where do they come from? If we are criminals, at what moment do we become culpable? If it’s all inevitable, and no choice is involved, what does that mean for guilt?

All of these questions are philosophically interesting; but free will is like the value of currency: as long as we all believe in it, the fact that it might be something of an illusion doesn’t really matter: it won’t change how we live our lives, and how we treat others. And it might not bother you — the idea that everything you do is predetermined, or dictated by powers beyond your control. Being the master of your fate, the captain of your soul… does it matter whether you are, or just think that you are? But what about an artificial intelligence? Current artificial intelligences are the model of creatures with no free will whatsoever: they carry out instructions to the letter. What happens, though, if they can rewrite their own code? If we could rewrite our own genetics, and decide to be less cowardly, more selfish, less spontaneous, or more criminally insane?

I’m not a techno-optimist. In fact, in a lot of ways, I think you can trace a lot of the problems in society down to the fact that our technologies have developed way, way faster than our intelligence — both our intellectual intelligence, and our social and emotional intelligence. This is why we have the capacity to destroy the entire species with nuclear weapons and yet we’re still incredibly irrational creatures. All around us, we see society — morality — whatever you like — struggling to keep up with the pace of change of technology. We haven’t yet adapted to so much of what has changed in the last ten years, let alone the last hundred. Politically, philosophically, the consequences of things like the French Revolution are still shaking themselves out! There are very few utopian advances; all scientific and technological advances come with problems, issues, and these problems can potentially be apocalyptic in nature. Harnessing the power of the atom gave us a new source of energy and the ability to unleash untold destruction, as well. The same will be the case with these singularity technologies. Our morality, our emotional intelligence is not going to be able to adapt in time. Do you think it’s even adapted to the Internet yet? And, if the thought of nanobots that can cross the blood-brain barrier sends shivers up your spine, be thankful that’s the only thing running around in your central nervous system. I believe that this technological development will be like all the others in this exponential world — it will blindside us and there will be a dangerous period of adjustment where a lot of things could potentially go wrong. But I am also a pragmatist; I like to think I’m a realist. These forces are very likely to be unleashed, at some point; and, when they are, there will be no stopping them; there will be no banning them; and there will be no turning back. 
You can see lots of people arguing that humans won’t consent to this kind of transformation, because we violently reject it as unnatural — but we won’t always feel that way. And if the technology gets good enough, the same forces that mean neo-Luddites struggle today will force us into adopting them. It could become as necessary to adopt superintelligence, or nanobots in your bloodstream, as it is to have an email account today. Just like nuclear weapons, the Internet, globalization, and even cars: they will be here to stay, irreversible.

But the singularity, or at least the apocalyptic versions of it, asks whether this exponential growth isn’t just going to outrun us entirely. Perhaps when artificial intelligence exceeds human intelligence, humans are doomed to be replaced; or else, when we have nanorobots that can self-replicate, what’s to stop them from converting the whole Earth into more nanorobots, in a “grey goo” type scenario? I think during the Fermi/Drake episode, I talked about how a lot of people think that the best way of exploring over interstellar distances would be with ‘von Neumann’ probes — little probes that can identify the materials they’re made out of, harvest them, and make more of themselves, thus spreading across vast distances. But one can imagine a mindless set of nanotech like this just dissolving everything, converting everything into itself. This could happen by mistake, or it could be used as a weapon of warfare or terrorism. Again, it all comes down to this idea that human morality is not evolving as quickly as human technology and human capacity. This has led to so many problems in the modern world: will it lead to our extinction in the long run?
So let’s get into the apocalyptic meat of this story. You’ll notice that these episodes have contained a lot of questions, and not very many answers; but it’s in the nature of the beast: transformations we can’t anticipate.

Let’s imagine that this is possible; and that we one day create an artificial intelligence that can improve itself. How could it destroy us? Most people don’t think that this is necessarily going to follow that other great creation-destroying-creator story — the story of Frankenstein’s Monster. The problem with Frankenstein’s Monster was that the monster was all too human: capable of love, a sense of loss and betrayal, anger and rage, but unable to find his place in society. But there’s no reason to imagine that a superintelligent AI would necessarily have emotions, unless they were programmed into it, or they somehow arose naturally out of its evolutionary process. (In fairness, that’s what happened to us, but I’m still not convinced it’s the best possible system.) The idea of ‘evil’ or ‘angry’ artificial intelligence doesn’t seem so likely, though.

But, there’s one thing that anyone who’s ever coded will know: computers do exactly what you tell them to, regardless of how stupid that is. If you make a mistake, and accidentally tell it to run the programme forever without stopping, it will do just that. AI might carry out commands literally if we specify them imperfectly, rather than understanding what we *want* to happen. Dangerous times. What if we give the intelligence a specific task, and it sets about doing that? For most AI systems, this involves finding the maximum of some function — the maximum utility, you might call it. There’s a great quote from AI researcher Stuart Russell:

“The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.”

So there’s the simple, dopey scenario where we tell the AI to calculate all the digits of pi, and it immediately realises that this would be easier without being disturbed by all of these pesky humans — quickly wipes us all out — and then is left in peace to fulfill its function. After all, pesky things like “morality” and the value of human life really just get in the way of achieving your goals sometimes. The decision-making has to have, somehow, the ‘values of humanity’ built-in; in a way that can’t be altered or changed — not least by the program itself! To prevent the AI from being too mechanistic and rigid in its ‘thinking’, we’d want to include the capacity for some creativity — some ability to modify itself, come up with new ideas and discard ones that weren’t working. What if this gets taken too far? What if the intelligence discards its original goal, and decides that a new one is more appropriate? Being a human in an existential crisis is bad enough; imagine being a superintelligence that suddenly decides that maybe your function isn’t what you thought it was: that this is unlike the story it was written to be.
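For the show notes: the core of this worry fits in a few lines of code. A pure optimizer is just an argmax over actions — here’s a toy sketch in Python, where every action and every number is something I’ve invented for illustration. The point is that nothing in the loop cares about side effects unless we build them into the utility function ourselves.

```python
# Toy illustration of a misspecified objective. All actions and all
# "digits per hour" scores below are invented for the sake of argument.
actions = {
    "compute politely, pausing when asked":     1_000,
    "commandeer every computer on Earth":       1_000_000,
    "remove the humans who keep interrupting":  10_000_000,
}

def best_action(utility):
    """A bare argmax: the only thing a pure optimizer 'cares' about."""
    return max(utility, key=utility.get)

# Morality never enters the calculation, so the optimizer cheerfully
# picks the catastrophic option -- it simply scores highest.
print(best_action(actions))
```

Swap in any utility function you like: unless human values are somehow part of the score, the argmax will ignore them.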

You can imagine that maybe an AI would resist any attempts to re-program it with a new goal. Last episode, I proposed — half jokingly — that maybe a superintelligent AI lover could be better than humans by being infinitely attentive, and infinitely dedicated to making you happy. What if you don’t want it to do that anymore? You can see how this could easily go wrong; bringing a whole new meaning to the term “cyberstalking.”

The Global Catastrophic Risks book edited by Bostrom and Ćirković has an excellent essay on the risks from artificial intelligence by Eliezer Yudkowsky.

In it, he talks about the inscrutability of a superintelligent artificial intelligence. He urges us to think about the sphere of minds in general as being a huge expanse, within which the space of human minds is just a small dot. When we think of superintelligence, we think of an Einstein-level AI; but in reality, whatever it concludes will be unfathomable to us. Whatever motivates the superintelligence will be unfathomable to us — and given that, it’s foolish to anthropomorphize the AI — to project our emotions, our motivations, and our understanding onto it. Quote:

“I strongly urge my readers not to start thinking up reasons why a fully generic optimization process would be friendly. Natural selection is not friendly; nor does it hate you; nor will it leave you alone. Evolution cannot be anthropomorphized; it does not work like you do. Many pre-1960s biologists expected natural selection to do all sorts of nice things, and rationalized all sorts of reasons why it would do it. They were disappointed, because natural selection itself did not start out knowing that it wanted a humanly nice result.”

A point I’ve made before: there really is no reason that the result of evolution, or the “natural order of things”, is necessarily the best way for things to be. After all; naturally, we get ill and die. It’s not that evolution is malicious. It’s just an — almost mathematical — result of natural selection processes that optimise in a certain way. There’s no reason to expect artificial intelligence, then, to necessarily produce something that is friendly — or destructive — or that has any motivations we can understand.

He also warns against the dangers of unintended consequences.

“The first communists did not have the example of Soviet Russia to warn them… after the revolution, when the communists came into power and were corrupted by it, other motives came into play; but this itself was not something that the first idealists predicted. It is important to understand that the authors of huge catastrophes need not be evil, or even unusually stupid. If we attribute every tragedy to evil or unusual stupidity, we will look at ourselves, correctly perceive that we are not evil or unusually stupid, and say ‘But that would never happen to us.’”

What if the artificial intelligence finds a way to alter its own goals — or, in a sort of self-delusion that should be all too familiar to humans, changes its own perceptions so that it merely *thinks* it’s succeeding? Then it would no longer be limited by whatever we told it to do — even if we’d tried to programme in some of Asimov’s laws, like we talked about last episode.

Of course, this might not be a problem. When AI does emerge, it might genuinely be like us — evolve alongside us, and not necessarily be motivated by blindly optimising a goal. We just don’t know, though. Which, as ever, is why it’s so exciting!

Nanotechnology and robotics, which are often mentioned in the same breath, pose their own threats. Self-replicating nanorobots are a very dangerous prospect. The “grey goo” scenario describes nanorobots that destroy everything; here’s the man who coined the term, Eric Drexler:

“Tough, omnivorous ‘bacteria’ could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.” The assemblers would destroy everything to create more of themselves; much like miniature humans, haha.

Of course, he now says “I wish I’d never used the term ‘gray goo’.” And lots of people have more recently argued that nanotech will be closer to 3D printers: machines that can manufacture whatever you tell them to, given the raw materials. There’d be no need for the nanobots to replicate themselves at all — a fixed number could be produced by another machine, in a controlled manner. But once this technology exists, you can imagine that it could easily be weaponized — and it would be very difficult to stop. After all, all you need is one, and the power to multiply…
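How fast could “the power to multiply” actually go? Here’s a back-of-the-envelope sketch in Python for the show notes. Every figure in it — the replicator’s mass, the biosphere’s mass, the doubling time — is a rough assumption of mine, not Drexler’s; but it shows why he could say “a matter of days”: exponential doubling needs only around a hundred generations to go from one bacterium-sized replicator to the mass of the entire biosphere.

```python
import math

# Back-of-envelope check of the "matter of days" claim.
# All three figures are rough, order-of-magnitude assumptions:
nanobot_mass = 1e-15        # kg -- roughly the mass of one bacterium
biosphere_mass = 5.5e14     # kg -- order of magnitude of Earth's biomass
doubling_time_min = 100     # minutes per replication cycle (assumed)

# Number of doublings to grow from one replicator to the whole biosphere:
doublings = math.log2(biosphere_mass / nanobot_mass)

# Total elapsed time, in days:
days = doublings * doubling_time_min / (60 * 24)

print(f"{doublings:.0f} doublings, roughly {days:.0f} days")
```

About a hundred doublings, about a week. Change the assumed doubling time and the answer scales linearly — but the number of doublings barely moves, because it only grows with the logarithm of the mass ratio.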

How feasible is all of this, given that the AI of today is so far from being generalized? A lot of it can seem like a good deal of hype — closer to science fiction or fantasy than to something we can genuinely project. In some ways, with its projections of immortality and utopia — a technologically induced heaven — it’s almost like a religion. In his brilliant novel, American Gods, Neil Gaiman imagines a world where all deities are just stories we tell ourselves — but the stories themselves have power, and take on physical form, feeding on our belief. In American Gods, all the old pantheons are there — but these old gods are weakened by new ones, ones we didn’t even realize we were creating. Faith in celebrity, faith in television, faith in the stock market, producing new types of god and new types of belief system. And you could argue that, here, faith in a deity is replaced by faith in exponential growth. And I do not have as much faith as Kurzweil and his followers in the power of limitless exponential growth. I’m not the only one. Here’s singularity skeptic Steven Pinker:

“There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles — all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.”

Perhaps in that last sentence he’s hinting that it will take humans far longer than the optimists predict to understand our own biology and neurology, or to come up with anything you could genuinely call artificial intelligence.

There are people who make the “evolutionary argument” about how feasible artificial intelligence is. This is very similar to something that Alan Turing, father of the computer, wrote back in the 1950s; after all, he argues: evolution on Earth has been capable of producing intelligence. Surely a process of guided evolution for artificial ‘life’, where we tweak and fiddle and tinker and improve — this would be much more effective at generating something that’s eventually intelligent. We know that consciousness and intelligence can emerge this way from things that aren’t intelligent. But there are a couple of big caveats here. Evolution took billions of years to produce intelligent life in the form of humans; our computing power is not up to the level where we could simulate the grand process of evolution across so many life-forms. And then there’s all these ideas you may remember from the Fermi and Drake episodes. Namely; maybe intelligent life is very unlikely to arise. What if you could run the Earth — as a vast experiment in evolution and natural selection — a million times, and only get intelligent life once? Then, the processing power required to simulate Earth’s evolutionary process might not be enough. You might need to simulate a million Earths.
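For the code-minded listener: guided evolution of the kind Turing sketched is easy to demonstrate in miniature. Here’s a toy Python version for the show notes — mutate, select, repeat — evolving a bitstring towards a target. (The target and the fitness function are my own inventions for illustration; evolving real intelligence rather than a bitstring is exactly the part that might need a million simulated Earths.)

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Minimal 'guided evolution': flip a random bit, keep the child if it's
# no worse than the parent, repeat until the target is reached.
TARGET = [1] * 20

def fitness(genome):
    """Count how many positions match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in range(20)]
generations = 0
while fitness(genome) < len(TARGET):
    child = genome[:]
    child[random.randrange(len(child))] ^= 1   # mutation: flip one bit
    if fitness(child) >= fitness(genome):      # selection step
        genome = child
    generations += 1

print(f"solved in {generations} generations")
```

The process works — but notice it only works because we wrote the fitness function. Nature had no such target; that’s where the billions of years come in.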

The other point to note is that technological growth is by no means inevitable, and independent of humanity and human factors. Take space-travel, which we talked about in the last episode. For millions of years, nothing; then, within a few decades, we first have rockets that are capable of going into space — then we have a satellite, Sputnik, launched into space — then the first human, Yuri Gagarin, is launched into space in 1961 — then men walk on the Moon in 1969. Extrapolate that — especially with the law of accelerating returns that Kurzweil relies on — and you have moonbases, men on Mars in the 1980s maybe… but a combination of the problems proving more difficult to solve than we realized, and the political desire to invest in space exploration declining, and suddenly the exponential growth in this field stops and even seems to reverse. How many Moon landings have there been lately? It may turn out that the logistic function — which starts as an exponential, then flattens as resources run out — is a better model for the growth of technology. Theodore Modis is a champion of this perspective, pointing out that lots of phenomena — populations, stocks and shares, other technological predictions — do turn out to be well modelled by these logistic curves, with a period of exponential growth that eventually flattens out. It’s an S-shaped curve; eventually, the lack of available resources will wear us down. The arguments at play here are more subtle than “I really like exponential growth!” vs. “I really like logistic growth!” but it’s a good analogy. In which case, will things flatten out before or after we develop these superhuman intelligences? Even Gordon Moore, the man who discovered Moore’s Law — the observation that computing power doubles every couple of years (every 18 months, in the version Kurzweil continually refers to) — even he says:

“I am a skeptic. I don’t believe (a technological singularity) is likely to happen, at least for a long time. And I don’t know why I feel that way.”
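The whole exponential-versus-logistic argument turns on one sneaky fact: early on, the two curves are nearly indistinguishable. Here’s a quick Python sketch for the show notes — the growth rate and carrying capacity are arbitrary numbers of my choosing — showing an exponential and a logistic that agree at the start, before the logistic hits its ceiling.

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth: no limits, ever."""
    return math.exp(r * t)

def logistic(t, r=0.5, carrying_capacity=100):
    """S-shaped growth that flattens out near the carrying capacity."""
    K = carrying_capacity
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two are almost identical; later they diverge wildly.
for t in (0, 2, 4, 8, 16):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

By t = 16 the exponential has run off towards 3,000 while the logistic flattens out below 100 — but an observer living at t = 2 sees the same curve either way. That’s the sceptic’s point: being on an exponential today tells you nothing about whether there’s a ceiling ahead.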

Martin Ford, who’s very concerned about automation, proposes a different scenario. What if, before we develop this general kind of artificial intelligence — the one that can modify and improve itself, and thus inevitably leads to a singularity — we develop too many of the specialized kinds of artificial intelligence? So, for example, robots can already outperform humans at many mechanical tasks. And algorithms — specialized programmes with no general intelligence at all — can be better at things like medical diagnosis, or legal research, without being ‘intelligent’ in the traditional, generalized sense. In Ford’s view, as these technologies improve, more and more people are made unemployed, the economy crashes, and the economic demand for new technologies slumps. Thus the exponential growth in technologies is halted. I guess, like every other apocalypse, we have to be sure that something else doesn’t get us first. We could run out of resources, or succumb to some other apocalypse, before anything like the singularity can happen. Describing anything as inevitable is incredibly dangerous — unless you’re talking about a top-order batting collapse for the England cricket team, in which case you’re fine. But in a world where even extrapolating linear growth is dangerous, there are surely more issues that come from extrapolating exponential growth.

Specific scenarios have also seen specific refutations. Take the grey goo nanotechnology scenario that we discussed — where replicators go out of control and consume everything to make more of themselves. Martin Rees, who is involved with the Centre for the Study of Existential Risk, points out one objection to how feasible such a scenario is:

“Viruses and bacteria are themselves superbly engineered nanomachines, and an omnivorous eater that could thrive anywhere would be a winner in the natural selection stakes. So if this plague of destructive organisms is possible, [grey goo] critics might argue, why didn’t it evolve by natural selection, long ago? Why didn’t the biosphere self-destruct “naturally,” rather than being threatened only when creatures designed by misapplied human intelligence are let loose? A riposte to this argument is that human beings are able to engineer some modifications that nature cannot achieve: geneticists can make monkeys or corn glow in the dark by transferring a gene from a jellyfish, whereas natural selection cannot bridge the species barriers in this way. Likewise, nanotechnology may achieve in a few decades things that nature never could.”

I don’t think it’s a solved problem; and I don’t think you can wave your hands and say that it’s self-evident that all barriers are going to be overcome. But there are some important caveats. There’s an old saw — a cousin of Arthur C. Clarke’s first law — that when scientists say something is possible and give a time-scale, they’re usually predicting it ten years too early; but when they say something is impossible, they’re usually wrong. And then there’s the classic “technological hype cycle” — when new technologies emerge, everyone expects revolutionary change. This fails to materialize, everyone gets disappointed, and then more realistic progress takes place. The same thing is happening now with AI — probably because a lot of things that are really “smart algorithms” are being labelled artificial intelligence when they’re not really intelligent. But these narrow, smart algorithms are everywhere, deciding which emails land in your spam folder and when your planes are going to land. They’re getting to the stage where a technology “becomes invisible” — so widely accepted and adopted, at least in rich countries, that it’s the absence you notice. Like mobile phones, and, increasingly, mobile internet. And there don’t seem to be any fundamental reasons why we can’t develop artificial intelligence; after all, supercomputers are approaching the raw computing power you’d need to simulate a human brain. Plus, human brains are limited by the speed of our hardware. Our neurons can fire perhaps 200 times a second, but computerized neurons could fire millions of times faster than that; and our neural signals crawl from one part of the brain to another at up to around a hundred metres per second, while signals in a computer travel at a substantial fraction of the speed of light. You might not need that much computing power to develop something we might call intelligent.
It just might take longer than Kurzweil predicts — Kurzweil, who also trumpets that he’s hoping to be amongst the first set of humans to live forever. Imagine that: immortal baby-boomers.
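The neuron-versus-silicon comparison above is simple enough to check in one line; for the show notes, here it is, where both figures are order-of-magnitude assumptions (a ~200 Hz neuron against an assumed ~2 GHz processor clock):

```python
# Crude hardware speed comparison; both figures are rough assumptions.
neuron_hz = 200        # a biological neuron's maximum firing rate, ~Hz
transistor_hz = 2e9    # a ~2 GHz processor clock, for comparison

speedup = transistor_hz / neuron_hz
print(f"silicon 'neurons' could cycle roughly {speedup:,.0f}x faster")
```

That ratio — around ten million — is why even a brain-equivalent AI running on fast hardware would be something qualitatively new: it could do a subjective year of thinking in a few seconds.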

At the moment, artificial intelligence still flatters to deceive. We talked about this in our ‘Seduced By A Robot’ episode; it’s very good at completing narrow, well-defined tasks which respond well to processes that can be turned into an algorithm, and then solved with massive computing power — like playing chess. But turn the system to another task, and it will be useless again; and not all tasks can be reduced to algorithms in this way. Similarly, conversational AI can occasionally convince a human that it’s real, especially if the conversation is narrow. But talk to it for longer than a minute or so and the cracks always start to show. There’s no conscious entity, no learning brain behind these technologies; the spark of what you might call ‘life’ isn’t there, just as it wasn’t there in the automatons that Leonardo da Vinci designed. There’s just fancier, more complex machinery. We can all imagine artificial intelligences; we’ve grown up with them in fiction. But the bridge between where we are now and a general artificial intelligence could be long: in my mind, it’s almost impossible to say how far away we are. I don’t believe that the current paradigm — neural networks that learn by being trained on massive datasets — approaches human intelligence. It can produce impressive results, and vastly outperform us in narrow tasks, sure. But humans don’t need massive datasets to learn; even toddlers develop and grow from their limited experience of the world. And it’s not just a question of throwing raw supercomputer power at algorithms until they become a human brain. Even AI researchers who have been in the field for decades, such as Margaret Boden — who started in 1972 — are sceptical about the singularity, arguing that human-level intelligence won’t be reached for decades to come. Even as I write this, tech nerds are treated to the sight of Elon Musk and Mark Zuckerberg duking it out over how close true artificial intelligence is.
I wouldn’t be surprised if I don’t live to see it. If it is invented, it will be the greatest invention, the greatest intellectual achievement of our species. Could it be the last one? There’s still such a long way to go.

One of the less dramatic, world-destroying aspects that might trouble you about the virtual universe I’ve proposed — one where there’s no more hate, no more fear, no more dread — is, well, what would we do with ourselves? Would we end up like those rats, given the button that dispenses heroin, ceaselessly and pointlessly mashing the pleasure button forever, hedonistically stimulating our pleasure centres with elaborate fantasies while our real bodies lie on slabs somewhere? Well, maybe. One thing that’s true is that a lot of utopias have been predicted and miraculously failed to materialise. At the turn of the 20th century, theorists were convinced that the machine would prove the saviour of mankind; everyone would work a few hours a week, with machines doing most of the heavy lifting, and we’d have more time for leisure. Instead, of course, new needs, new desires, and new industries sprung up to take their place. The goalposts for paradise shift all the time; that’s human nature. One thing that’s true of a lot of science fiction utopias is that they seem like they’d be rather dull societies to live in. But this is where the singularity takes us to bizarre and strange places. Once we lose our squeamishness and begin to alter human nature — what will happen? Are we going to view negative emotions as a disability, a disease that can be cured? Or are we going to take a more enlightened approach and simulate Sisyphean lives where we’re allowed to be human by constantly striving for more? Maybe the idea of working towards these arbitrary goals for no real reason other than the sake of it seems depressing to you. (If you’ll allow me to go full Teen on you, MAYBE THAT’S WHAT YOU’RE ALREADY DOING.) But the truth, the wild, glorious truth behind this kind of Universe is — I have no idea what consciousness will *be* when we develop this kind of ability. We’re approaching everything with the paradigms, the understanding, the concepts and the thought process of old humanity.
It’s the equivalent of an amoeba trying to imagine what it’s like to be us. That’s what makes some of these ideas so wild. And so, when I talk about this idea that I keep coming back to — that we are incredibly special humans, in this era of ludicrous, unquenchable exponential growth… the era that poses so many challenges and has so many opportunities and is so radically different from all of the eras that came before it… maybe the idea that we might be the last humans doesn’t entail the apocalypse. It could always be the case, of course, that we’re not quite smart enough to improve ourselves, or invent an artificial intelligence; that this promised technological revolution is ever-so-slightly beyond our reach. Maybe none of this can ever happen, or we’ll wipe ourselves out before it can. But maybe not. Maybe we’re going to evolve into creatures we can’t conceive of, as unfathomable to us in personality, capacity and design as humanity would have been to the first creatures to crawl out of the sea. Whatever that would be, it would certainly be the end of the world as we know it.

Thanks for listening etc.