"...transferred into a suitable inorganic structure, the pattern of neuron and molecular traces and their relationships that is [a man's] inner self becomes potentially immortal."
-Poul Anderson, Genesis (New York, 2001), p. 126.
Does it? I do not think of my inner self as a pattern of neuron and molecular traces, but that is because I subjectively experience my inner self whereas we externally observe patterns of neuron and molecular traces. What I am aware of as an inner self differs in every conceivable way from what I perceive as a pattern of neuron and molecular traces. Describe a privately experienced inner feeling, then describe a publicly visible pattern of traces, then ask what the feeling and the pattern have in common. However, presumably a single process can be both subjectively experienced and objectively observed, and it will appear differently in each case.
If what is transferred to the inorganic structure is merely an accurately detailed image of the neuronic pattern, then, since images (photographs, films, drawings, reflections, diagrams, etc.) are not conscious, this image is not conscious and therefore is not an "inner self." However, if what exists in the inorganic structure is a pattern that has exactly the same effects as the neuronic pattern, then it should generate consciousness and thus should duplicate the man's "inner self." So what inorganic structure would be "suitable" for this duplication?
How do we know what is conscious? How do we know that a string puppet or a clockwork toy is not conscious? Because we can account for its movements without assuming that it is conscious. The assumption that an organic being who speaks, converses, displays emotion, screams in pain when struck, etc., is conscious is a far simpler explanation of that being's behavior than the assumption that he is an elaborate automaton merely simulating conscious responses. If an unconscious automaton were able to pass the Turing test, then I would have a major philosophical problem.
For previous discussion see here and here.
5 comments:
Hi, Paul!
The sheer complexity of the mind/body problem you have touched on is a major reason why I am skeptical about the possibility of self-aware, conscious AIs and about speculations that humans might upload their personalities/memories into an AI. Mind you, Poul Anderson at least makes such speculations SEEM possible, both here in GENESIS and in the HARVEST OF STARS books.
Sean
Sean,
If an artifact can exactly duplicate, not merely simulate, brain functions, then the artifact can be conscious but that is a big if! To exactly duplicate brain functions, it might have to exactly duplicate a brain. Therefore, it might have to be a brain, i.e., to be organic, not inorganic!
Paul.
Hi, Paul!
Exactly! There are too many unlikely "ifs" for me to think that self-aware AIs, and the uploading of human personalities and memories into them, are LIKELY.
Sean
I recall being on Compuserve's Science Fiction Forum when Poul was invited to answer questions from members.
The topic of AI came up, including the book "The Emperor's New Mind" by Roger Penrose, which postulates that consciousness is 'non-algorithmic' and that quantum mechanics plays an essential role in it.
Poul noted that, if so, this would make truly conscious computers impossible, but I noted that it would then just take us a few extra centuries to create a device which duplicates the performance of a conscious brain.
Kaor, Jim!

And I still remain skeptical. And I suspect Anderson would be inclined to agree with me. Ad astra!

Sean