If you thought there was nothing intellectual to draw from an Arnold Schwarzenegger action franchise, read on. To honour the DVD release of the underrated fourth film in the series, Terminator: Salvation, TORO sat down with world-renowned science-fiction author Robert J. Sawyer to discuss some of the very heady themes posited by its creative team.

Ottawa-born Sawyer has published 18 novels and won countless genre awards, as well as recognition from many members of the scientific community. His latest trilogy -- the “WWW” series (Wake is out now, with Watch and Wonder to come in 2010 and 2011, respectively) -- deals with an independent intelligence emerging from the internet, and its surprisingly beneficial effect on humanity.

Unlike James Cameron, Sawyer seems to believe in the positive possibility of artificial intelligence, but that was just one of the subjects we touched on in this very involved conversation.

Q: Let’s start with the basic concept of the Terminator series: that we will see robotics evolve into a sentient existence. Given what we know about technology now, is this foreseeable in the near future?
It’s not only foreseeable, it’s inevitable. There’s no question that as the internet -- or “Skynet” -- or some interconnected network of computers gets more and more complex, it’s going to equal and then exceed the number of interconnections there are in the human brain. We have proof of concept that when that complexity happens, consciousness emerges. What’s the proof of concept? Us. Forty thousand years ago, our brains became sufficiently complex that self-aware, self-reflective consciousness emerged in us. We started making cave paintings. We started wearing makeup. You can’t paint antelopes in caves unless you have an “inner eye.” The antelope’s not there, you’re painting from memory. You don’t wear makeup unless you have a sense of self and what other people are perceiving about you. We know that, given sufficient complexity in a system, consciousness will emerge. Skynet is the connected network of military computers in the Terminator universe, and something like that will indeed reach the level of complexity this decade, next decade on the outside.

Q: Will it be “accidental” as in Terminator?
I think that’s the most interesting thing about the series; hitherto, almost every story about artificial intelligence in science fiction had it as a deliberate attempt: Asimov’s Robot series, HAL from 2001 being created in a lab in Urbana, Illinois. I think the Terminator series was the first to posit that this could happen spontaneously, and I think that’s far more likely to be the way it’s going to happen. My current novel, Wake, is about the World Wide Web becoming self-aware. Nobody designs it to happen, it just happens, an emergent property of its complexity. I take the positive spin, that we will not only survive but thrive when that happens. Terminator takes the negative spin, where we’ll be subjugated and destroyed by it. It does matter a great deal which one comes to pass, but you need both visions. You need the cautionary one to say “here’s what to avoid,” and you need the positive one to say, “here’s what we’re working towards.” And that’s what the sweep of science fiction does, it gives you all the possibilities, and you go after one or the other. Terminator is obviously the cautionary tale: if you let machines gain consciousness, they are going to be smarter than us. They may start off dumber, but ultimately they’re going to be brighter. And if they’re brighter than us, then by definition they can outwit us.

Q: And they will decide that we are “useless”?
In Terminator, they didn’t just decide we were useless, they decided we were an actual impediment. Useless you can live with. There are all sorts of useless people on Earth, we don’t round them up, we just let them become lawyers. In the Terminator universe, we are actually dangerous. Why? We go around making war, we blow things up. What are the fundamental things that a computer network needs? A stable source of energy, nobody to blow up the planet, no electro-magnetic pulses, which wreak havoc with electronics. What’s the No. 1 cause of instability in the availability of energy sources? It’s us, and geopolitics. What’s the No. 1 cause of nuclear holocaust? What’s the No. 1 danger to computers? Us. In the Terminator model -- and it’s the same in my novel -- it’s emergent. Nobody has set out to create this. What is consciousness? Nobody actually knows what consciousness is, we don’t have a good working definition of it, but it seems to be something that emerges from complexity. Certain things are not conscious -- a rock isn’t conscious. A paramecium, a single-celled organism, has no consciousness. But at some point, you get very dim self-awareness, and at some other point, very complex awareness. But what the Terminator movies are saying is not just “don’t monkey around with a computer in your basement,” but also that without safeguards, it’s going to happen anyways. Complex systems are going to become self-aware, and you’ve got to watch out for that. Skynet “woke up” of its own volition and decided we were a problem for it.

Q: What do you mean by “safeguards?” Legal standards?
Absolutely. Right now, there are all kinds of legal hurdles that have to be overcome before you can do an experiment involving a pharmaceutical, or a new surgical procedure. All kinds of safeguards are put into place, and companies play by those rules. There are no safeguards in the computer industry, apart from the normal ones about avoiding accidents in the workplace that apply to anything. But if you say, “I want to build a computer that’s smarter than any human being,” they say, “have fun!” If you were to try to make a drug to cure cancer, you would have to adhere to an enormous regimen of regulatory concerns before you could even do animal testing. And yet we have what could conceivably be the downfall of the human race -- the emergence of true artificial intelligence -- happening in a completely unregulated environment.

Q: An intelligence greater than our own?
Our intelligence has not advanced one iota in 40,000 years. The human brain has not gotten bigger. So we’re kind of stagnant. But computer power doubles every 18 months -- that’s Moore’s Law. Terminator: Salvation is set in 2018; if computers match our intelligence then, 18 months later -- on Canada Day, 2019 -- they will be twice as smart as we are. Whereas you’re no brighter than Aristotle, and neither am I. We haven’t made progress in how bright human beings are.
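Sawyer’s date arithmetic is easy to check: exactly 18 months -- one Moore’s Law doubling period -- separate the start of 2018 and Canada Day, 2019. A minimal sketch of that calculation (the function name and the January 2018 baseline are illustrative assumptions, not from the interview):

```python
from datetime import date

DOUBLING_MONTHS = 18  # the Moore's Law doubling period cited in the interview

def capability_multiple(start: date, when: date) -> float:
    """Relative computing capability at `when`, taking `start` as 1x."""
    months_elapsed = (when.year - start.year) * 12 + (when.month - start.month)
    return 2 ** (months_elapsed / DOUBLING_MONTHS)

# If machines match human intelligence at the start of 2018...
baseline = date(2018, 1, 1)
# ...then by Canada Day 2019, 18 months later, they are twice as capable.
print(capability_multiple(baseline, date(2019, 7, 1)))  # → 2.0
```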

Q: What about emergent properties of our intelligence, like morality?
That’s a very interesting question. Where did that come from, for us? Morality, altruism, ethics, and their flip-sides: belligerence, greed, violence. I think that’s what the saving grace of the human race might be, what I posit in my series of novels. Not that our positive natures came out of our evolution, but our negative ones: we are competitive, because we evolved in a scarce environment. If I out-compete you, I eat, you don’t. We form cliques and warring parties because we recognize in a small group we can defend ourselves better against another small group. On a genetic level, I don’t care what happens to you, I care what happens to my immediate family. That’s programmed in by biology. There’s no reason to think a computer is going to have that competitive nature, that computers will band together, or that a computer will have any innate desire to advance its relatives, because it doesn’t have any. Computers can make rational decisions without Darwinian baggage, so we might see altruistic computers.

Q: Should we make standards of rights for this new intelligence now? Soon enough, it will want individual rights, the right not to be controlled by another influence, in this case, us.
Yes, but it might not help us anyways. Yes, we should be proactive about our future, if we are concerned that machines will exceed our intelligence in the near-future. Shouldn’t we be taking steps to ensure that is a contained phenomenon? There are two reasons why that might not do us any good. The first is that once something becomes more intelligent than you, it wins. It wins every battle of wits. So no matter what plan you put into place, it sees the flaws in your reasoning. The second reason is: we don’t have a world government. We don’t have standards of law everywhere for any one thing. You could try, here in Canada, to outlaw human cloning for instance, but they’ll do it in Korea, or Thailand, or God-knows-where. And unlike large-scale projects like the development of the atomic bomb or going to the moon, which can’t happen in small spaces, breakthroughs in genetics or programming can happen in a single room. Someone’s garage. How on Earth do you contain that?

Q: Will this intelligence have a sense of self-preservation, basically not wanting to cease existing or to “die?”
In us, you could say that comes from our Darwinian engine, but it can’t just come from that. The Darwinian engine says survive long enough to reproduce, but we don’t all commit suicide after we have babies, or after we are biologically reproductive. Post-menopausal women still want to live. There’s more to it than that. Survival seems to be something that comes along with intelligence, and isn’t just “selfish genes,” despite what Richard Dawkins would have us believe. So there clearly is a desire that goes along with being self-aware. Take “I think, therefore I am” one step further: I think, therefore I want to keep thinking. That will be inherent in any conscious entity, whether it’s replicative or not. Skynet, or whatever it’s going to be, is going to look around and think “what are the impediments to my survival?” And we better not be on that list.

Q: What are our standards to judge when this evolution has actually happened? When is a computer more than a computer?
Alan Turing, the father of modern computing, did the work for us about a half-century ago. He devised the “Turing Test”, which is doing exactly what you and I are doing right now, but with a wall between us. You can hear me but you can’t see me, ask me any question you want, and judge by my answers whether I’m a machine or a human being. Right now there’s no computer on the planet that can fool a skilled questioner about whether or not it is a computer. We saw this test, though they didn’t call it by name, in Blade Runner. A whole bunch of questions are asked of Rachael, and they determine that she is an android. If by reasonable standards you can’t determine whether something is or is not human, it is human, even if it’s made out of silicon.

Q: Or it’s something else entirely, beyond human.
It’s not Homo sapiens, but the sweep of history has been about expanding the definition of a person. Take the American example: the guys who wrote “all men are created equal” really did mean men and not women, and by men, they meant white men. Slave-owners were able to sit down and write those words without seeing the irony of it. What we came to recognize is that blacks and women are people, too. Younger people are fully-formed people. There will be a point where we don’t draw the line of “human” as being “genetically human.” It’s much more interesting to ask what is “behaviourally human.” The debate about the human condition, for centuries, has been about who else gets to be human. The Ten Commandments, when they say “honour thy neighbour” and “thou shalt not kill,” only referred to people in your tribe. It was still perfectly acceptable, under the law of that time, to kill people from other tribes.

Q: So it’s inevitable that these new robotic beings will suffer through a period of persecution?
Yes, because everybody wants to feel special, and nobody gives up that specialness easily. It was not easy to get slavery abolished, nor to get women the right to vote, and it has not proven easy to get gays the right to marry. Even though it’s a non-zero-sum game: granting 10 per cent of the population the right to marry doesn’t take away anyone else’s right to marry. Machines, even if they are beneficent, will have a period of being treated like property. There will be civil rights battles, and ultimately the decision will have to be made that you cannot shut off consciousness without due process of law.

Q: What about the integration of human flesh with mechanical adaptations?
We make more progress every year in doing that. Mechanical parts -- knees, hips -- that don’t have any electronic element to them are routinely done now, as of course are dental implants. Electronic implants exist -- pacemakers. Are you a “cyborg” by definition if you have a pacemaker? By definition, yes, you are part cybernetic organism, and if you remove the pacemaker you cease to live. We’re going to be able to replace more and more parts as time goes on. You will see the first fully artificial human within a few generations. Our bodies are not durable, and most of us have a desire to live longer than our physical forms are capable of living, in a robust way. One way is through cybernetic devices.

Q: Will that pose a problem, our promotion of life through mechanical integration?
You can have three things. You can have infinite life, infinite reproduction, or you can stay put on Earth. You can’t have all three of them. If you want to have infinite life, you can’t have infinite reproduction. We don’t have the wherewithal to support an infinite mass of people on the planet. Ultimately you run out of resources.

Q: Let’s shift to another element of the Terminator series: time travel. What stage are we at in terms of achieving anything close to what science-fiction generally presents?
No one has ever moved anything backwards in time. Not one nanosecond. Right now all we have are mathematical elements that say we might be able to. There’s long been an argument that if you throw something into a black hole, it might come out at some other point in space, or time. The gross physical world we live in seems not to be conducive to time travel, in that our history shows no evidence of anyone ever coming back in time and tampering. Why wouldn’t you have warned the captain of the Titanic, if you could have? Why didn’t somebody come back from the future to assassinate Hitler? Because they couldn’t, certainly not because they didn’t want to. We live in a universe that shows all the scars of reality; when you cut something, it bleeds and it doesn’t get fixed.

Q: Do we have an understanding of what time “is” beyond our day-to-day, hour-to-hour perception of it?
There is some pretty good mathematical understanding of time. Time is a dimension, like the three dimensions of physical space, with an unusual characteristic: you can only move along it in one direction. There’s an arrow to time that there isn’t to width, height, or depth. Why time is so different is a very interesting question.

Q: In your opinion, what fictional depictions of time travel have gotten it closest to being “right?”
I am going to give props to the Terminator series; they end up with this endless cycle of people trying to undo what others have already done. And I think they were as attentive as they could be to the paradoxes of time travel. They recognize the continuous loops.

Q: There’s a film out now called Collapse, a documentary about journalist Michael Ruppert, who posits that before all of these advances happen, we will see a drainage of natural resources. All of this will become moot because humanity will regress back to a point of basic survivalism.
If we allow that to happen. You don’t have to be Al Gore to see the foolishness in depending on a finite energy resource that mostly comes from politically unstable regions. That’s just nuts, but we can choose to not be just nuts. What Terminator: Salvation ends up saying is that there is a part of us -- they call it the human heart -- that will never be replicated by any machine. That is the ability to think beyond your own personal selfishness. All of the emotions we ascribe to the heart -- love, compassion, selflessness -- are ones that are altruistic. And that will be our salvation.
