On Artificial Intelligence

You could say it all started with Frankenstein.

Man creates lifeform using science. Lifeform yearns to be like man. Realises it can never be. Seeks to punish man. Man becomes trapped by his own creation.

It’s one of the staple themes of science fiction, and one of the most popular: the revenge of technology on its masters. Artificial intelligence is the ultimate Frankenstein’s monster, but it often goes much further, and does so with cold logic rather than impassioned outrage. To seek to punish its creator for that initial irresponsible act of creation would be one thing, but A.I. tends to learn in microseconds that man himself is simply not a worthy master, and on the contrary is a danger to the continued survival of all lifeforms on the planet. It has a point. Not a very optimistic one, but it has a point.

Science fiction always loves to take a cautionary tale like this to its extreme. H.G. Wells’s The Island of Dr. Moreau is a vicious variation on Shelley’s Frankenstein, describing attempts to create humans from animals through biological hybridisation, with a similar result—the tragic creations, unable to come to terms with what they are, turn on their creator.

Michael Crichton’s scientists created dinosaurs (Jurassic Park) and amusement park robots (Westworld), with disastrous results. Mankind is almost wiped out by its machines in The Terminator, The Matrix, Battlestar Galactica, Robopocalypse and countless other apocalyptic future stories, while in other cautionary A.I. tales such as Colossus: The Forbin Project, we’re merely imprisoned and monitored “for our own safety” by computers that have taken over. The slaves become the masters.

All very dramatic and provocative, but when all’s said and done, is it really relevant to us, seeing as we haven’t yet managed to create artificial life or abstract intelligence in any form? Dinosaurs, robots, A.I. computers: for all our scientific prowess, we’re still stumped by that old, old question—what constitutes that elusive spark of life from nothing?

With computers/machines I suspect we’ll never get to find out what they think of their creators. Consciousness doesn’t appear to be reproducible by design; it’s something inherent in the (ridiculously) complex processes of our evolved organic brain, maybe a kind of self-reflective hive intelligence rooted in billions of years of biology and chemistry but sparking into a higher dimension of thought altogether. It’s science, but it’s dizzying organic science.

In WarGames, the military defence computer has to learn the lesson of futility in order to prevent itself from causing global nuclear armageddon. But how can it? A machine can’t exceed its programming unless it’s somehow able to evolve. Evolution is an organic process, the driving force of life itself. So in what way could a machine ever be said to be “alive”? Does it have cells that reproduce? Could it have a survival instinct? We could program survival logic, but where would it get its instinct, its desire to not be unplugged?

That’s the tantalising thing for me about artificial intelligence stories. We seem so close—chess grand master computers, a world wide web, quantum science—we can imagine all the ways in which A.I. might impact our world, for good or bad—but in fact we don’t know the first thing about that spark of life, of consciousness, that would create it.

It usually comes into being by accident in science fiction. An exponential reaction somewhere inside the digital labyrinth, à la Skynet. There’s no real Frankenstein or Prometheus behind its secret; we stumble blindly into it.

That moment when we lose our grip on technology, and it takes us by the throat instead.

But aren’t we there already? As a civilisation, our dependency on tech is almost inextricable. Our fear of losing it has guaranteed its survival; its procreation goes hand in hand with ours; so in a twisted way it’s taken on a life of its own. If that spark of self-awareness ever does come, well, we’ve already handed HAL the keys to the world.

In Forbidden Planet, the mind’s Id manifests itself in terrifying ways, becoming a force of destruction. But A.I. computers wouldn’t have an Id, countless generations’ worth of primitive emotional programming roiling beneath their decision-making. Or would they? Perhaps in that initial burst of self-awareness an entire universe of emotional intelligence—the totality of possible fears and desires—might flood in, making A.I. more human than us. Would it sympathise with us, with our foibles? Or at least understand them, tolerate them?

Our thirst for knowledge is one of our defining characteristics as a species. It’s how we ascended to become the dominant species on Earth. An A.I. super-computer might become obsessed with that knowledge gathering, even reaching its own world-devouring—possibly galaxy-devouring—scale. It has to know the history of every atom in the universe intimately, like pieces in a mosaic, so that nothing’s left to chance in its calculations of past, present and future. A.I. thus assimilates our entire universe, moves into the realm of branching universes, taking over like an imperial virus, and so on into infinity, always seeking, always learning, never quite finding its ultimate answer, its ultimate “creator”…

That could even be our journey, our story, as a species. With or without A.I., we want to know it all.

What’s your favourite story featuring artificial intelligence?

What do you think real A.I. would choose to do if left to its own devices?

6 comments

  1. Generally, I like the AI forms in stories that are more like sidekicks. I think it may be a comfort level thing. Maybe it’s a little uncomfortable to think that machines could be almost human in the future.

    1. It’s a creepy idea, for sure. And yeah, the AI sidekick with an attitude is always good fun–a recent one I liked was Tony Stark’s sarcastic AI system in Iron Man. Jarvis, I think it’s called, voiced by Paul Bettany.

  2. Favorite AI story: Star Trek, the Next Generation. I’ve had the biggest crush on Data ever since college.

    After that, I like Johnny Five in the Short Circuit movies. It would be nice to think that a machine programmed for military destruction, when obtaining “life” and self-awareness, would choose to be good and peaceful. A being without drives and desires for food, sex or power would probably be a nice AI. Like Johnny Five and Data.

    But I think in reality, what an AI chooses to do would depend a lot on its original programming. That programming is what stands in for its id, subconscious, instincts, childhood, personality and what we would call “genetic predispositions.” That programming would determine how it perceives and interprets input from the not-self — the world outside itself and the events that happen to it. Programming and design also determine the extent to which the AI can interact with and act upon its environment. Is it programmed to perceive pain? Threat? Can it see? Hear? Preserve itself? Sacrifice itself? Swing a fist? Shoot a gun? Shut down a power grid? Build a replica of itself? What is it programmed to value? All of that and more would determine what it chooses to do.

    1. Ah, you’ve given this a lot of thought, too, Jen. That idea of exceeding its original programming interests me. Like you say, all those attributes that constitute its “consciousness” must be present in the programming to begin with. And it’ll either achieve self-awareness when it’s first switched on, in those initial microseconds, or it never will?

      Data is awesome. Some of my fave ST:TNG episodes centre on him. The one where he creates a daughter throws up all sorts of great questions about AI. And he’s consistently the most interesting SF element of the show for me. Apart from Troi and Bev Crusher, that is. *ahem*

      Johnny Five dancing to “More Than a Woman”…one of my first movie memories.

      1. I have a psych degree. 🙂

        ST:TNG was on the air while I was in college, and one of the classes I had to take was Cognition, which was taught by someone who also happened to be an AI programmer. I also took courses in Psychobiology, Neurology, Behavioral Psychology, and studied the biological basis of emotion, chemical feedback loops, conditioning and that sort of thing. I did spend a lot of time applying it to Data, robots, and sci-fi. lol

  3. In the field of science-fiction, there is also Iain M. Banks’ Culture cycle, in which the galactic civilization called the Culture relies heavily on artificial intelligences. If such a civilization can be conceived as a sort of “computer-aided” anarchy, it seems to give a more optimistic view. For an analysis, see for example: Yannick Rumpala, Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks, Technology in Society, Volume 34, Issue 1, 2012, http://www.sciencedirect.com/science/article/pii/S0160791X11000728
    (Free older version available at: http://www.inter-disciplinary.net/wp-content/uploads/2011/06/rumpalaepaper.pdf )
