Time Magazine’s Newsfeed reports an interesting encounter with a robo-call from a health insurance telemarketing company. What’s so interesting about a robo-call, you ask? Answer: this one overtly lies to you. Specifically, the clear female voice at the other end denies that she is a robot, and insists that she is indeed a “real person.”
Technically, it’s probably correct. After all, corporations are “persons,” too, legally. But there’s something eerie about this incident:
The phone call came from a charming woman with a bright, engaging voice to the cell phone of TIME Washington Bureau Chief Michael Scherer. She wanted to offer a deal on health insurance, but something was fishy.
When Scherer asked point blank if she was a real person, or a computer-operated robot voice, she replied enthusiastically that she was real, with a charming laugh. But then she failed several other tests. When asked “What vegetable is found in tomato soup?” she said she did not understand the question. When asked multiple times what day of the week it was yesterday, she complained repeatedly of a bad connection.
Over the course of the next hour, several TIME reporters called her back, working to uncover the mystery of her bona fides. Her name, she said, was Samantha West, and she was definitely a robot, given the pitch-perfect repetition of her answers.
This greatly interests me because it highlights the central problem of artificial intelligence: ethics. The creation can at best be only as ethical as its creator. Mankind originally was ethical, but fell through the works of Satan and man’s own devices. Man chose an autonomous ethic at odds with his Creator’s.
When fallen man attempts to create computers in his own image, the great danger is the lingering effects of the fall. Man is a murderer, blasphemer, adulterer, thief, and liar. When he creates robots in his own image, there will be great temptation to create them as murderers, blasphemers, thieves, and liars.
In short, ethics must always precede intelligence, and intelligence always operates upon some presupposed standard of ethics. The robot can never transcend the level of ethics with which it is programmed—i.e. commanded.
In this case, we have a robot cleverly programmed to lie about being a robot. It is meant to fool people into providing personal information which the company could then use to market health insurance to them. The main barrier to success is how well the robot is able to lie. Can it pull off the task of sounding human enough that people will not notice?
This is commonly referred to as the Turing test. As described below, pioneer computer specialist Alan Turing proposed, basically, that if a computer can communicate with a human being in such a way that the human cannot tell the computer is not human, then that machine can be said to “think” like humans think.
From this came a flood of studies that have since formed the fields of artificial intelligence, as well as plenty of science fiction. These efforts are essentially man’s attempt to create a being in his own image—or perhaps even evolve himself into a super-human of some sort.
What we see in this particular instance is how the effects of the fall will inevitably tarnish that creation. Mankind, the liar (Rom. 3:4), cannot resist the temptation to create liars in his own image, and for the purposes of his own deceit.
I dealt with the issue of artificial intelligence briefly in my book Biblical Logic—specifically under the fallacy of false analogy. In many areas, we see AI fail to measure up to the fullness of a human “person.” Unfortunately, one of those areas is not sin. It’s just one more reminder that the works of man’s hands are all tainted with his own depravity—some more openly than others.
The scary thought is the realm of possibilities for depraved ethics that this opens up. Consider drone warfare.
Here is my section from Biblical Logic (pp. 189–192):
One of the most powerful False Analogies in modern times results from the advancement of artificial intelligence (AI). Based on a resemblance in function between a computer and a human brain, many unbelieving scholars, writers, and entrepreneurs have created a strained analogy and loaded all their hopes and efforts into AI as the avenue to human advancement (evolution). One atheist in particular writes:
Computers obviously do not think quite like humans, and I do not claim that the computer of today is necessarily a valid model of the human brain. But computers process data and make decisions based on that data, which is all that the human brain does under the label of thinking. A prime area of study today is artificial intelligence (AI), and its practitioners harbor few doubts that someday computers will be made to do all the operations normally associated with human intelligence, and many more. If the intelligence that results is not strictly human, that is not to say that it will necessarily be inferior. Perhaps artificial intelligence will be superior, with characteristics and capabilities the human mind cannot even imagine.…
Future computers will not only be superior to people in every task, mental or physical, but will also be immortal.…
If a computer is ‘just a machine,’ so is the human brain.…1
This case involves atheist author Victor J. Stenger discussing the future possibilities of artificial intelligence (AI), and in the process, creating a False Analogy. Stenger works from one point of similarity—that both computers and brains process data and make decisions based on that processed data—but takes too much liberty in allowing other points to necessarily follow. For example, he claims that if we can call a computer “just a machine,” then we must also say so for the brain. Besides dabbling in Reductionism, the point does not follow at all. By the same logic we could argue, “If a computer plugs into a wall socket, so does the human brain. If a computer wears out in five years, so does a human brain.” We can legitimately classify a computer as a machine. We cannot, however, place the human brain under the same category. Yes, computers do some things that brains do, and brains do some things that computers do, but one given similarity does not necessarily imply any others. To claim otherwise commits the fallacy of False Analogy.
While disavowing that the computers of his day could reasonably compare to a human brain, Stenger holds that in the future, technology will catch up with nature. This certainly holds true to a degree. Chips work much faster and more powerfully today than when Stenger wrote twenty years ago. “Moore’s Law”—formulated in 1965—has so far accurately predicted the exponential increase in computing power for microchips. Based on this regular progress, many futurists project that by 2035 (give or take), computer chips will reach the processing speed of the human brain, and that computing power will then race beyond that threshold at exponential speeds.
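The arithmetic behind such projections is simple compound doubling, and it can be sketched in a few lines. This is a minimal illustration only: the two-year doubling period, the starting chip throughput, and the brain-equivalent figure of 10^16 operations per second are all assumed round numbers for the sketch, not figures from the text or from Moore’s original paper.

```python
import math

# Illustrative assumptions (not from the text):
DOUBLING_PERIOD_YEARS = 2.0   # assumed doubling period for chip power
CHIP_OPS_START = 1e11         # assumed chip throughput at the start year (ops/sec)
BRAIN_OPS = 1e16              # rough assumed brain-equivalent throughput (ops/sec)
START_YEAR = 2010             # hypothetical baseline year

def years_until_parity(chip_ops, target_ops, doubling_years):
    """Years of steady doubling needed for chip_ops to reach target_ops."""
    doublings = math.log2(target_ops / chip_ops)
    return doublings * doubling_years

years = years_until_parity(CHIP_OPS_START, BRAIN_OPS, DOUBLING_PERIOD_YEARS)
print(f"Parity around {START_YEAR + round(years)}, under these assumptions")
```

Under these made-up inputs the sketch lands in the 2040s; nudging the doubling period or the brain estimate shifts the answer by decades, which is exactly why such futurist dates carry a “give or take.”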
One theory argues that once these superfast machines are programmed to design other superfast machines, they will soon leverage their superior “brain power” to create, on their own, even more superior machines. In theory, we will have robots creating robots. One of the early writers on the future of AI concluded that “the intelligence of man would be left far behind.… Thus the first ultraintelligent machine is the last invention that man need ever make.…” For those of you who, like me, have visions of Terminator from this theory, the professor candidly adds, “provided that the machine is docile enough to tell us how to keep it under control.”2 Many other authors and scientists have joined in the vision, including such famous names as Isaac Asimov, Carl Sagan, Ray Kurzweil, and novelist Vernor Vinge.
Perhaps the most famous contribution to this field came from Alan Turing, who is considered the father of modern computer science. Turing proposed that if a computer program, under certain conditions, can imitate human intelligence closely enough to fool a human judge, then we should accept the idea that machines “think” in the same way we do. This creates quite a debate among philosophers and scientists, and has created a massive wave of scholarship within the relatively new discipline of neuroscience. Computer scientists, physicists, psychologists, philosophers, and neuroscientists constantly buzz about the future of AI.
All these debates and theories aside, however, the question will always remain whether something successfully imitating human intelligence indeed equals human intelligence. Even in such a wild future as these scholars, including Stenger, imagine—even if AI far surpasses the abilities of the human brain in many ways—Stenger’s analogy will still face the risk of being False. The human brain may yet have capacities and means that the binary computations of a computer chip can never approximate, despite outward appearances in function, performance, or subjective evaluations by humans.
I mentioned the fallacy of Reductionism in regard to Stenger’s claim. Fueling his bad analogy, the Naturalistic Fallacy at the base of his worldview basically reduces all of reality to physical reality. In order to remain consistent, Stenger must argue that “processing data” in the same way that a computer does “is all that the human brain does under the label of thinking.” But is this really true? Can we really reduce human “thinking” to a purely physical process? Christian scholar Stanley L. Jaki, in his book Brain, Mind and Computers,3 argues against such “radical reduction of thought processes to physical ones.”4 He cites, among countless others, the classic philosophers including Pascal. Pascal created one of the earliest adding machines—a proto-computer—and had clear thoughts about the possibility of AI. Of his machine he said, “A calculating machine achieves results that come nearer to thought than anything done by an animal. But it does nothing that enables us to say it has will, as we say animals have.”5 Jaki essentially concludes that despite faster and more complex computers, equating mechanical computation with human thought ignores too much evidence to the contrary—that “the machine analogy of mind is patently insufficient to account for” human intuition, comprehension, meaning, and judgment, among other things.6
Forcing the naturalistic worldview and reducing all thought into its artificial mold produces all kinds of fallacies; Stenger’s False Analogy of AI is just one expression of them. The computer-brain idea is definitely artificial, but definitely not the best of intelligence.
1. Victor J. Stenger, Not By Design, quoted in David A. Noebel, Understanding the Times: The Story of the Biblical Christian, Marxist/Leninist, and Secular Humanist Worldviews (Manitou Springs, CO: Summit Press, 1991), 128.
2. I. J. Good, “Speculations Concerning the First Ultraintelligent Machine,” Advances in Computers 6 (1965): 33; see http://en.wikipedia.org/wiki/I._J._Good, accessed December 30, 2008.
3. Stanley L. Jaki, Brain, Mind and Computers (New York: Herder and Herder, 1969).
4. Jaki, Brain, Mind and Computers, 17.
5. Quoted in Jaki, Brain, Mind and Computers, 23.
6. Jaki, Brain, Mind and Computers, 250–251.