Technological advances almost always come with a price: not only the cost of materials, labor, and research and development, but an opportunity cost as well. Think of the advantages of a cell phone or the “OnStar” system that comes on many vehicles. One of the many positives is that in the event of an accident or emergency you can contact help, or be contacted by it, if you need to. The downside is that you can be contacted and tracked at any time, not just when you desire it. Most of us are willing to sacrifice a bit of personal freedom or anonymity in these cases in order to have peace of mind. It goes without saying that as we progress further along the technological path, we are presented with more and more moral dilemmas.
Autonomous robots, one of the many progeny brought to us by artificial intelligence, are the next big quandary that scientists, philosophers, and ethicists are beginning to seriously ponder. What was once relegated to the world of science fiction is quickly becoming scientific fact. “Autonomous robots are able to make decisions without human intervention. At a simple level, these can include robot vacuum cleaners that ‘decide’ for themselves when to move from room to room or to head back to a base station to recharge.” While robot vacuum cleaners may sound harmless, robot border guards or security police are not. Just as the checkout clerk at the grocery store has been replaced by four “self check-out” machines, so might the front-line forces of the military be. Advocates argue that blowing up another country’s machines in an automated “war” is certainly preferable to killing its soldiers, but this is a supreme oversimplification of the problem. When a country is “out” of machines, it will send its soldiers anyway; the government owns both man and machine.
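The kind of machine “decision” described above is worth seeing for what it is: a rule the designers chose in advance, evaluated against sensor readings. A minimal sketch, with hypothetical names and thresholds chosen purely for illustration:

```python
# A sketch of the rule-based "deciding" a robot vacuum performs.
# The function names and the 20% battery threshold are illustrative
# assumptions, not taken from any real product.

def next_action(battery_pct: float, room_dirty: bool) -> str:
    """Pick the vacuum's next action from its sensor readings."""
    if battery_pct < 20.0:
        return "return_to_base"    # the 'decision' to recharge
    if room_dirty:
        return "clean_room"
    return "move_to_next_room"     # the 'decision' to change rooms

print(next_action(15.0, True))     # low battery wins: return_to_base
print(next_action(80.0, False))    # nothing to clean: move_to_next_room
```

The machine “decides” nothing its programmers did not already decide for it; the thresholds and priorities are the designers’ choices, frozen in code.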
The main focus of the Artificial Intelligence (AI) debate hinges on what exactly is meant by consciousness. Most people, apart from a deeply disturbed few within the confines of the philosophical community, presume that humans “have” consciousness because they are “self-aware.” That is, they can relate “themselves” to their surroundings, their environment, and other people.
Some beings are unconscious; some are conscious; man alone is truly self-conscious. It is because he can turn upon himself and examine his own actions or his own decisions that his behavior is inevitably plastic. It is because man can be critical of all ways of doing and thinking, including his own ways, that he exhibits the possibility of truly moral judgment.
The real rub of AI is that self-conscious man cannot replicate his “consciousness” in his creation. He can “design” a robot that can perform certain tasks, monitor other machines, and even make decisions up to a point, but he cannot impart self-awareness to it. This is the realm of the metaphysical, where scientists fear to tread. Autonomous robots, it turns out, are no more autonomous than the humans who created them. They require input, software to drive the hardware, and that requirement becomes a much stickier issue than simply animating a mass of metal and wires. The problem only grows as the tasks given to the machines become more complex in the not-so-distant future. “Manufacturers are exploring ways to make robotic toys look after children, which experts say will lead to child-minding machines able to monitor youngsters, transmitting their progress to the parents by onboard cameras.” And according to Professor Alan Winfield, “the danger is that we will sleepwalk into a situation where we accept a large number of autonomous robots in our lives without being sure of the consequences. The outcome could be that when given a choice the robot could make the wrong decision and someone gets hurt. They can go wrong just like a motor car can.” It is at this point in the AI debate that the scientific community begins to see the limits of its “matter-only” worldview. What exactly makes a robot “gone bad” any different from a Seung-Hui Cho or an Ed Gein? After all, aren’t both just animated matter? Isn’t this just the statistical probability of the matter “going bad”?
Interestingly, the AI debate has “become very influenced by probability theory and statistics” during the last two decades. Scientists admit that consciousness is ill-defined and impossible to measure empirically or observe under the microscope, so they have developed “measuring” devices like the Turing test. While the Turing test does not claim to measure “consciousness,” some have claimed that any device that passes the test is necessarily “conscious.” One would presume that Cho or Gein could, in fact, have passed the Turing test, but does this mean you would want them watching your children? Statistics have become a major focus of the AI debate because statistics is as close to hard science as one can get when dealing with metaphysical concepts like intelligence, consciousness, or self-awareness. If you can decrease the probability that a machine will “go bad,” you can increase the likelihood of success with the AI device. But who wants to entrust their children to statistics? Even the best statistical odds with AI cannot eliminate the “human factor,” which is prone to error, miscalculation, and even downright deceit. All scientists will readily admit that man is not infallible, so what makes us think that we could somehow design a machine that is? Jonathan Glover ends his book Humanity: A Moral History of the Twentieth Century with what amounts to a plea of desperation:
To avoid further disasters, we need political restraints on a world scale. But politics is not the whole story. We have experienced the results of technology in the service of the destructive side of human psychology. Something needs to be done about this fatal combination. The means for expressing cruelty and carrying out mass killing have been fully developed. It is too late to stop the technology. It is to the psychology that we should now turn.
Glover realizes too late that technology is simply a tool in the hands of the real problem: man himself. Glover is actually echoing the sentiments of the late Stephen Jay Gould, who in 1994 wrote: “[O]ur ways of learning about the world are strongly influenced by the social preconceptions and biased modes of thinking that each scientist must apply to any problem. The stereotype of a fully rational and objective ‘scientific method,’ with individual scientists as logical (and interchangeable) robots, is self-serving mythology.” Glover and Gould see the problem. Unfortunately for them, their humanistic, materialistic worldview could not provide an answer.
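The statistical point above can be made concrete. Even a machine that “goes bad” with vanishingly small probability on any single day fails with near certainty once enough machines run for enough days, because independent risks compound. A short sketch, with illustrative numbers of my own choosing rather than figures from any study:

```python
# If each machine independently "goes bad" with probability p per day,
# the chance that at least one of n machines fails over d days is
#     1 - (1 - p) ** (n * d)
# The values of p, n, and d below are purely illustrative.

def prob_any_failure(p: float, n: int, d: int) -> float:
    """Probability of at least one failure across n machines over d days."""
    return 1.0 - (1.0 - p) ** (n * d)

# A one-in-a-million daily failure rate for a single machine on one day:
print(prob_any_failure(1e-6, n=1, d=1))
# A fleet of 10,000 machines over 100 days, same per-day reliability:
print(round(prob_any_failure(1e-6, n=10_000, d=100), 3))
```

A one-in-a-million daily failure rate sounds reassuring for one machine, yet ten thousand machines running for a hundred days push the chance of at least one failure to about 63 percent. Decreasing the probability never makes it zero; the odds merely move, and the “human factor” behind the code remains.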
BBC News, “Robot future poses hard questions,” 24 April 2007.
 David Elton Trueblood, General Philosophy (New York: Harper and Row, 1963), 165.
Rebecca Camber, “Are we safe from robots that can think for themselves?” Daily Mail, 23 April 2007.
 Jonathan Glover, Humanity: A Moral History of the Twentieth Century (New Haven: Yale Univ. Press, 1999), 414.
 Stephen Jay Gould, “In the Mind of the Beholder,” Natural History (February 1994), 103:14.