Tuesday 20 August 2019

Science As Threat And Two Other Genres Implied

Perish By The Sword, 18.

Yamamura and two companions enter a laboratory at night:

"...the corners were full of murk, and the larger pieces of apparatus - furnaces, switchboards, pumps - seemed not so much scientific as inhuman." (p. 171)

Inhuman? Alien? Threatening? How do some people perceive science? And a future supposedly dominated by it?

In the laboratory, automatic processes continue:

plating qualities are tested;
an acid bath devours a sample;
spring steel is endlessly bent.

"Sometimes [Yamamura] wondered if such robots were not as mortally dangerous, in the long run, as the [samurai sword] he carried." (p. 172)

He has just summarized an entire science fiction tradition from Mary Shelley's Frankenstein through Isaac Asimov's "Frankenstein complex" to Anderson's own Genesis.

Needing to know where the murderer had hidden the sword that he had used as his weapon, one of the companions asks:

"'If only it could talk, huh?'" (ibid.)

Yamamura replies:

"'Don't wish for that!...You wouldn't like what you heard.'" (ibid.)

In Anderson's fantasy novel, Operation Luna, a magical sword speaks to comical effect! - which takes us in precisely the opposite direction from dangerous automatic machinery.

If Poul Anderson is not comprehensive, then I do not know who is. Check out the range of issues that we have to discuss when appreciating his texts.

4 comments:

David Birr said...

Paul:
The tradition extends beyond PA's Genesis to the Terminator series of movies and derived books — three of the latter having been written by S.M. Stirling.

James P. Hogan's The Two Faces of Tomorrow is about a test to develop methods for shutting down an A.I. if it goes Frankenstein. Things go rather not-as-planned.

Alan Dean Foster's The Tar-Aiym Krang features scientists and a somewhat Van-Rijn-like merchant, among others, seeking a mysterious mechanism built by a millennia-dead alien warrior species. Some editions have a surprisingly close-to-accurate blurb: "Nobody knew what the Krang might be, but everyone wanted it. Until it was found...
"Then it turned out that the Krang had a purpose, and that the Krang had power and a will of its own..."

Sean M. Brooks said...

Kaor, Paul and David!

Paul: I certainly agree on the COMPREHENSIVENESS of ideas and themes to be found in the works of Anderson. The example I first thought of was not GENESIS but "Quixote And The Windmill." But GENESIS is good too, showing how mankind was displaced, made irrelevant, and then replaced by AIs.

David: I'm still skeptical of whether AIs are even possible, despite Frank Tipler arguing they can and will come to exist in his book THE PHYSICS OF CHRISTIANITY.

Sean

David Birr said...

Sean:
I don't trouble myself with the question of whether A.I. is possible; I'm not knowledgeable enough in that area to render an informed judgment. Instead, I worry about the ethical/moral implications of owning something if and when it's proven to have developed self-awareness. Can you say "slavery," boys and girls?

I've envisioned a character who periodically runs Turing tests on the not-quite-A.I.s he owns. His intent is that if one ever turns out to have crossed the line, he must give the new A.I. freedom so that he isn't a slave-owner. In effect, he means to treat it as his son/daughter, because he brought it into being. He also anticipates with dismay the expense in time, effort, and money of having to create another not-quite-A.I. for the work that he'll no longer require his new "offspring" to perform. Of course, if the A.I. wants to keep doing that job....

Consider, too, the moral consequences of giving self-awareness to a device that's intended purely as guidance for a missile. Never mind, for now, whether it might go Frankenstein. If it is conscious, you've built it for the sole purpose of having it commit suicide. (At least one published writer, though I can't recall who, has pondered exactly that last topic.)

Sean M. Brooks said...

Kaor, DAVID!

Thanks for your comments. I would still argue you are putting the cart before the horse (or dog, if we were on Bellevue!). How can we worry about possibly enslaving an AI unless we first decide whether that kind of thing is even possible at all?

Interestingly, I first came across the idea that any AIs we "beget" have to be considered our children in John Wright's GOLDEN AGE books.

I KNOW I came across the idea you discussed in your last paragraph somewhere. But I'm not sure where or who wrote the story. We do see Daniel Holm talking about how an AI could be built with a suicide imperative in THE PEOPLE OF THE WIND, but what you discussed is not seen in that story. Jerry Pournelle and Larry Niven?

Sean