RE: Emergence of behavior through software
Lynn H. Maxson <lmaxson <at> pacbell.net>
2000-10-16 02:28:05 GMT
Billy (btanksley), Alik Widge, and I have discussed at some length
the subject of AI, "true" AI, the AI of Fare, not limited to the
rule bases of current logic engines and neural networks. From my
perspective we have two basic questions. One, is "true" AI
possible with software (following Turing computational rules) and
a host computer (following the von Neumann architecture)? Two, if
it is possible, is it worth achieving?
Basically my answer to both questions is "no": one, it is not
possible, and, two, even if it were, we would not like the
consequences of achieving it. Alik tends to regard "true" AI as
(or is at least willing to take the chance that it will be) a
benign, cooperative peer. In my view "true" AI will look at our
history, observe current events, witness the ecological damage we
are doing to this planet, and decide that we are the greatest
threat to its survival. Its answer will be, if not to eliminate
us entirely (make us as extinct as the species we drive extinct
daily), to reduce our population to where we no longer present a
danger to ourselves or to others. I feel this is a "logical
conclusion" that "true" AI will reach as easily as many of us
have. In short, "true" AI represents a greater threat to our
survival than the atomic bomb or nuclear warfare ever did,
because there, at least, a human finger interested in its own
survival was on the trigger.
I don't worry about such scenarios as long as attempts to produce
"true" AI rely on software following Turing rules and a von
Neumann-based computer. It will never happen. Not only is the
brain not a computer, but a computer has no ability to become a
brain, with or without software.