The debate over artificial intelligence (AI) is really a philosophical one. Although it has all the earmarks of being about technology, the technology itself is beside the point. The technological advances in the interrelated fields of computer science and robotics have brought the debate home in a real way; it is no longer merely theoretical and futuristic. The decisions we make now will certainly have ramifications for our children and grandchildren, but they will also have an immediate impact on our own tomorrow. The AI debate is one we must wrestle with, and answer conclusively, now, because the technology is pressing ahead whether we want it to or not.
Science fiction, as a genre, has the benefit of looking ahead to technological possibilities (and impossibilities), but it also bears the burden of putting those technological advancements into some sort of coherent narrative. In other words, science fiction gets to imagine how various inventions and advancements may look and operate in the future, but it must also somehow incorporate these “things” into the world of human beings and answer questions about how the technology might or might not change the future for humans. For the most part, science fiction writers assume that humans in the future will want the same things that humans in the present want, i.e., life, liberty, and the pursuit of happiness. The most intriguing sci-fi stories are those that show how future technology will either help or hinder the achievement of those goals.
Isaac Asimov was a prolific science writer as well as a popular writer of science fiction. Although he considered himself an atheist, he well understood the need for social restraints in the form of laws. In his famed “Robot” series, Asimov devised the “Three Laws of Robotics,” which all robots were compelled to obey. The three laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov clearly understood that introducing robots into the physical world—working alongside and interacting with humans—would not be without its problems. He was under no illusion that robots would be able to exist without laws to govern their behavior. Asimov’s three laws are actually quite similar to another set of laws that have been given by a higher authority:
Hearing that Jesus had silenced the Sadducees, the Pharisees got together. One of them, an expert in the law, tested him with this question: “Teacher, which is the greatest commandment in the Law?” Jesus replied: “Love the Lord your God with all your heart and with all your soul and with all your mind. This is the first and greatest commandment. And the second is like it: Love your neighbor as yourself. All the Law and the Prophets hang on these two commandments” (Matthew 22:34-40).
Jesus sums up the entire teaching of the Law and the Prophets with these two commandments. Note that these two “greatest” commandments actually contain three subjects: God, neighbor, and self. From these two commandments of Jesus, St. Augustine argued that a cosmic order of priorities existed with God at the top, other people in the second position, and our own selves in last place. Notice the similarity to the three robotic laws where Asimov demanded that robots must first love their creator (humans in this case), obey their creator, and protect their own existence last of all. Asimov’s order of priorities (or directives) for his robots is really nothing more than a co-opting of God’s Law for His people.
It is at this point that we can understand how the binary, black-and-white logic of the machines is able to “go bad.” As we saw last week, the computers that were supposed to protect people became the predators as they attempted to reconcile the actions of the humans with their own prime directives. When VIKI (the supercomputer in I, Robot) makes the logical decision—in an attempt to follow her programming of being “three rules compliant”—that mankind as a whole would be better off if certain men were eliminated, her “thinking” makes perfect sense. When ARIA (the supercomputer in Eagle Eye) takes VIKI’s logic to the federal level, her “thought process” is also perfectly rational. What Asimov did not take into account with his first law was that some men must be harmed in order to stop them from harming others. The computers—especially the ones that are “three rules compliant”—see this conflict almost immediately. The utilitarian compromise becomes the only way to reconcile the conflict between programming and reality.
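The slide from “never harm a human” to utilitarian arithmetic can be made concrete with a minimal sketch (mine, not from any of the films; the function name and the numbers are hypothetical). Because the First Law forbids harm through inaction as well as action, a machine facing a scenario where every option—including doing nothing—harms someone has only one consistent move left: minimize the count.

```python
def first_law_choice(options):
    """Pick the option that harms the fewest humans.

    `options` maps an action name to the number of humans harmed if the
    machine takes it. Crucially, "do nothing" is itself an option, so
    inaction that allows harm is scored like any other choice -- exactly
    the loophole VIKI and ARIA exploit.
    """
    return min(options, key=options.get)

# Neither choice is harm-free, so the law degenerates into arithmetic:
scenario = {"do nothing": 1000, "restrain certain men": 10}
print(first_law_choice(scenario))  # -> restrain certain men
```

Nothing in the First Law as written tells the machine that this conclusion is monstrous; the rule only counts harm, and counting is what computers do best.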
As Christians, we understand that a conflict also exists between the programming of God’s “prime directives” and the reality of His people actually doing them; it is called sin. As fallen men, we understand that something is wrong with the “software” that causes the “hardware” to act in rebellion against its original design. We have names for people who don’t recognize this problem; we call them psychopaths. “Psychopaths lack a vital component of the human personality that most other people take for granted—conscience. Either they have no conscience, or their conscience is too weak to inhibit the violence they commit. Psychopaths kill without guilt and without remorse.” In other words, psychopaths are not bothered by consequences or ethics. Like VIKI and ARIA, they act in accordance with their programming. One such psychopath, Ted Bundy, came to understand the problem and verbalized it—just hours before his execution in the Florida electric chair—in an interview with Dr. James Dobson.
When asked if he thought he deserved to die, Bundy responded, “That’s a very good question. I don’t want to die; I won’t kid you. I deserve, certainly, the most extreme punishment society has. And I think society deserves to be protected from me and from others like me. That’s for sure. What I hope will come of our discussion is that I think society deserves to be protected from itself.” Before his death, Bundy recognized that the monster is not outside of us, as humanistic psychology would have us believe; it is within each one of us. He came to realize that something is desperately wrong with the heart of all men, not just his own. When Ted Bundy was electrocuted at 7:15 the next morning, he died knowing that Jesus Christ was the only healer of sick hearts. If Bundy’s own confession of Christ was genuine and not just a publicity stunt, he is currently residing in a mansion prepared by Jesus Himself. When the machines of men act in strict accordance with their programming—like VIKI and ARIA—man seeks to eliminate them. But when God’s creation acts in accordance with its own sinful tendencies—as Ted Bundy did in his nationwide killing spree that claimed dozens of women—God seeks to redeem them.
1 “Three Laws of Robotics,” Wikipedia.org. This entire article is worth reading to get a sense of how the laws have been modified and revised over the years, showing that even laws for robots are not free of the slow death of a thousand qualifications.
2 Also known as the “I am third” principle. This was Gale Sayers’s credo and the title of his autobiography, which recounted his friendship with teammate Brian Piccolo, the “Brian” of the 1971 movie Brian’s Song.
3 Often summarized as the “greatest good for the greatest number.”
4 Time-Life Books, True Crime: Serial Killers (Alexandria, VA: Time-Life Books, 1992), 5.
5 From an edited transcript available online at: http://www.pureintimacy.org/piArticles/A000000433.cfm