"The Children's Hour," Chapter I.
I address this issue every time it recurs, so my response may have become a bit samey:
"'And we're putting in a Class-VII computer system.'
"Jonah raised a brow. Class-VII systems were consciousness-level; they also went irredeemably insane sometime between six months and a year after activation, as did any artificial entity complex enough to be aware of being aware." (pp. 149-150)
A mere computer, however high its "Class," is no more conscious than a mere automobile, however advanced its model, is. However, maybe some artificial entities can both become conscious and be combined with or incorporated into computers, automobiles etc.?
Complex neural interactions, or some analog of them, are necessary for self-consciousness, but sensory consciousness is generated by mobile, sensitive organisms interacting with their environments, not just by internal complexity. The quoted dialogue partly recognizes this because it states that complexity generates not mere awareness but awareness of awareness.
7 comments:
Kaor, Paul!
And the points you listed are why I remain skeptical about AIs ever becoming a reality. But of course neither of us objects to writers like Anderson, Pournelle, Stirling, etc., examining such ideas.
Ad astra! Sean
Sean,
I present arguments against computers becoming conscious but not against some kind of artifact becoming conscious.
Paul.
Kaor, Paul!
But computers are artifacts. And I think it will be necessary to use computers if any AIs are ever to become real. Something I am still skeptical about!
Ad astra! Sean
Sean,
But not all artifacts are computers. There are reasons for saying that a computer cannot be conscious, but a separate argument would be necessary to demonstrate that a different kind of artifact would never be able to duplicate the functions of a brain.
Paul.
Kaor, Paul!
And I still believe some kind of "computerizing" artifact would be necessary before any kind of AI might work. Assuming AIs are even possible.
Ad astra! Sean
Sean,
Within a computer, a process essentially similar to this occurs:
a tape with rows of punched holes unwinds past a camera;
5 holes in a row signify a member of the population of African descent;
4 holes signify someone of European descent;
etc;
thus, counting how many holes in how many rows is a way of recording data about ethnic origins;
obviously, the computer itself contains consciousness neither of ethnicity statistics nor of anything else;
the only consciousness involved is in the people who first reduce information to (equivalents of) holes on a tape and secondly extract the information as computer output;
a conscious AI has to duplicate the activities of a brain, not of a computer.
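The tallying process described above can be sketched in a few lines of Python. The hole-count codes and category labels here are illustrative only, taken from the analogy rather than from any real tape format; the point of the sketch is that the program shuffles numbers mechanically, while the meanings attach only in the minds of the people who chose the encoding and who read the output:

```python
# Hypothetical decoding table: number of holes punched in a row -> category.
# These codes are illustrative, following the analogy in the comment above.
CATEGORIES = {5: "African descent", 4: "European descent"}

def tally(rows):
    """Count category occurrences from a sequence of per-row hole counts.

    The machine only increments counters; the significance of each count
    exists solely for the people who encode the tape and read the output.
    """
    counts = {}
    for holes in rows:
        label = CATEGORIES.get(holes, "unknown")
        counts[label] = counts.get(label, 0) + 1
    return counts

# Example: a tape of six rows, given as per-row hole counts.
print(tally([5, 4, 4, 5, 5, 3]))
```

Nothing in `tally` "knows" it is recording ethnicity statistics; swap the labels in the table and the identical mechanism records something else entirely.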
Paul.
Kaor, Paul!
I am not disagreeing. I do agree a conscious AI, if that is possible, has to duplicate the activities of a human brain, not those of a computer. What I'm trying to suggest is that, before any AI can exist, the process leading up to it has to somehow make use of the advantages a computer can give to an AI: vast storing of data, and rapid analysis thru brute-force number crunching designed to eliminate the most unlikely possibilities. And I think any AI will continue to use computerized data storing and number crunching.
Ad astra! Sean