Science fiction is replete with stories in which some of the characters are robots or smart computers. Science-fiction robots are often androids. Such machines are invariably designed with the idea of helping humanity, although it often seems that the machines play roles in which some humans are “helped” at the expense of others.

A recurring theme in science fiction involves the consequences of robots, or intelligent machines, turning against their makers, or coming to logical conclusions intolerable to humanity. This theme is called the Frankenstein scenario, after Mary Shelley's famous novel, in which a scientist's creation turns against him.

A vivid example of the Frankenstein scenario is provided by the novel 2001: A Space Odyssey, in which Hal, an artificially intelligent computer on a spaceship, tries to kill an astronaut. Hal somehow malfunctions, becomes paranoid, and believes that Dave, the astronaut, is intent on the computer's destruction. Ironically, Hal's paranoia brings about the very misfortune Hal dreads, because Dave is forced to disable Hal to save his own life.

A machine might react logically to preserve its own existence when humans try to “pull the plug.” This could take the form of apparently hostile behavior, in which robot controllers collectively decide that humans must be eliminated. Asimov's Third Law directs a robot to protect its own existence, so a robotic survival instinct can be useful, but only up to a certain point: the First Law, which takes precedence, forbids a robot ever to harm a human being.

Another example of the Frankenstein scenario is the team of computers in Colossus: The Forbin Project. In this case, the machines have the best interests of humanity in mind. War, the computers decide, cannot be allowed. Humans, the computers conclude, require structure in their lives, and must therefore have all their behavior strictly regulated. The result is a totalitarian state run by a machine.
