You could say it all started with Frankenstein.
Man creates lifeform using science. Lifeform yearns to be like man. Realises it can never be. Seeks to punish man. Man becomes trapped by his own creation.
It’s one of the staple themes of science fiction, and one of the most popular: the revenge of technology on its masters. Artificial intelligence is the ultimate Frankenstein’s monster, but it often goes much further, and does so with cold logic rather than impassioned outrage. To seek to punish its creator for that initial irresponsible act of creation would be one thing, but A.I. tends to learn in microseconds that man himself is simply not a worthy master, and on the contrary is a danger to the continued survival of all lifeforms on the planet. It has a point. Not a very optimistic one, but it has a point.
Science fiction always loves to take a cautionary tale like this to its extreme. H.G. Wells’s The Island of Dr. Moreau is a vicious variation on Shelley’s Frankenstein, describing attempts to create humans from animals through biological hybridisation, with a similar result—the tragic creations, unable to come to terms with what they are, turn on their creator.
Michael Crichton’s scientists created dinosaurs (Jurassic Park) and amusement park robots (Westworld), with disastrous results. Mankind is almost wiped out by its machines in The Terminator, The Matrix, Battlestar Galactica, Robopocalypse and countless other apocalyptic future stories, while in other cautionary A.I. tales such as Colossus: The Forbin Project, we’re merely imprisoned and monitored “for our own safety” by computers that have taken over. The slaves become the masters.
All very dramatic and provocative, but when all’s said and done, is it really relevant to us, seeing as we haven’t yet managed to create artificial life or abstract intelligence in any form? Dinosaurs, robots, A.I. computers: for all our scientific prowess, we’re still stumped by that old, old question—what constitutes that elusive spark of life from nothing?
With computers and machines I suspect we’ll never get to find out what they think of their creators. Consciousness doesn’t appear to be reproducible by design; it’s something inherent in the (ridiculously) complex processes of our evolved organic brain, maybe a kind of self-reflective hive intelligence rooted in billions of years of biology and chemistry but sparking into a higher dimension of thought altogether. It’s science, but it’s dizzying organic science.
In WarGames, the military defence computer has to learn the lesson of futility in order to prevent itself from causing global nuclear armageddon. But how can it? A machine can’t exceed its programming unless it’s somehow able to evolve. Evolution is an organic process, the driving force of life itself. So in what way could a machine ever be said to be “alive”? Does it have cells that reproduce? Could it have a survival instinct? We could program survival logic, but where would it get its instinct, its desire to not be unplugged?
That’s the tantalising thing for me about artificial intelligence stories. We seem so close—chess grandmaster computers, a world wide web, quantum science—we can imagine all the ways in which A.I. might impact our world, for good or bad—but in fact we don’t know the first thing about that spark of life, of consciousness, that would create it.
It usually comes into being by accident in science fiction. An exponential reaction somewhere inside the digital labyrinth, à la Skynet. There’s no real Frankenstein or Prometheus behind its secret; we stumble blindly into it.
That moment when we lose our grip on technology, and it takes us by the throat instead.
But aren’t we there already? As a civilisation, our dependency on tech is almost inextricable. Our fear of losing it has guaranteed its survival; its procreation goes hand in hand with ours; so in a twisted way it’s taken on a life of its own. If that spark of self-awareness ever does come, well, we’ve already handed HAL the keys to the world.
In Forbidden Planet, the mind’s Id manifests itself in terrifying ways, becoming a force of destruction. But A.I. computers wouldn’t have an Id, countless generations’ worth of primitive emotional programming roiling beneath their decision-making. Or would they? Perhaps in that initial burst of self-awareness an entire universe of emotional intelligence—the totality of possible fears and desires—might flood in, making A.I. more human than us. Would it sympathise with us, with our foibles? Or at least understand them, tolerate them?
Our thirst for knowledge is one of our defining characteristics as a species. It’s how we ascended to become the dominant species on Earth. An A.I. super-computer might become obsessed with that knowledge gathering, even reaching its own world-devouring—possibly galaxy-devouring—scale. It has to know the history of every atom in the universe intimately, like pieces in a mosaic, so that nothing’s left to chance in its calculations of past, present and future. A.I. thus assimilates our entire universe, moves into the realm of branching universes, taking over like an imperial virus, and so on and so on into infinity, always seeking, always learning, never quite finding its ultimate answer, its ultimate “creator”…
That could even be our journey, our story, as a species. With or without A.I., we want to know it all.
What’s your favourite story featuring artificial intelligence?
What do you think real A.I. would choose to do if left to its own devices?