The People of the Wind, V.
Daniel Holm argues that the Terrans might get a robotic spaceship to:
"'...land like a peaceful merchantman. Consciousness-level computers aren't used much any more, when little new exploration's going on, but they could be built, including a suicide imperative. That explosion would be inside a city's force shields; it'd take out the generators, leaving what was left of the city defenseless; fallout from a dirty warhead would poison the whole hinterland.'" (p. 491)
Two Issues
AI
Rule-governed manipulation of symbols is neither knowledge of their meanings nor consciousness of anything else. Therefore:
computational processes, whether mechanical or electronic, merely simulate but do not duplicate intelligence;
computational processes that are merely more "...sophisticated..." (p. 491) will not thereby rise from an unconscious to a conscious level.
However, an artifact that duplicated brain processes would necessarily be conscious, and such an artifact might also perform computational processes.
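To illustrate the first point, here is a minimal sketch in Python of purely rule-governed symbol manipulation (the rules and example sentences are invented for illustration and are not drawn from Anderson's text): the program maps input strings to output strings by pattern matching, and nothing in it knows what any of the symbols mean.

    # A minimal, purely rule-governed symbol manipulator (an ELIZA-style sketch).
    # The program only matches and rewrites character strings; it has no access
    # to, or representation of, what the symbols mean.

    import re

    RULES = [
        # (pattern, template): captured text is spliced into the reply unchanged.
        (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"i feel (.+)", re.I), "How long have you felt {0}?"),
    ]

    def respond(utterance: str) -> str:
        """Return a reply by applying the first matching rewrite rule."""
        text = utterance.strip().rstrip(".")
        for pattern, template in RULES:
            match = pattern.fullmatch(text)
            if match:
                return template.format(*match.groups())
        return "Please tell me more."  # default when no rule matches

    if __name__ == "__main__":
        print(respond("I am afraid of the warhead."))
        # -> "Why do you say you are afraid of the warhead?"
        # The reply is produced by string matching alone; "afraid" and "warhead"
        # are just characters to the program.

However large or "sophisticated" such a rule set becomes, the program is still doing the same thing, transforming uninterpreted strings, which is the sense in which more sophisticated computation does not by itself cross over from an unconscious to a conscious level.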
Morality
Morally, to build a conscious and intelligent artifact with a suicide imperative would be to commit murder.
1 comment:
Kaor, Paul!
And none of our computer systems (hardware/software) has duplicated the mind like that. And I am skeptical that it is even possible. Moreover, the Imperials did not use a ship with an AI programmed to have a suicide imperative. Which argues for the Terrans deciding there were some things they could not decently do even in wartime. So Daniel Holm was wrong.
Sean