From: Lee Corbin (lcorbin@rawbw.com)
Date: Sat Jun 28 2008 - 02:34:58 MDT
Tim writes
> By the way, if the AI has any long-term goals, then it will want to
> preserve its own integrity in order to preserve those goals. Although
> "preserve its own integrity" is a good enough example for the issue at
> hand, it's not something you'd really need to put in there explicitly.
Yes, I see that; an urge for survival could come as a consequence
of other things.
Is it taken as a premise in these discussions that the AI
is utterly rational and will be much more successful than
we humans at rooting out internal contradictions?
I have in mind the hypothetical (but doubtless very real)
person I described the other day: he has no further wish
to live, and would hardly put up any resistance at all to
a killer who came suddenly into his presence. Yet even
while finding it too inconvenient to take the actual steps
to end his own life, he can be momentarily amused by
certain kinds of chat with friends or acquaintances, lives
from moment to moment, watches lots of TV, and reads cheap
novels. He probably *would* take those steps, though, if
his tax accountant perished and he faced the terrible
inconvenience of trying to find a new one or doing his
taxes himself.
Equally well, my calculator seems to display a tremendous
urge to finish any computation I key into it, but doesn't
seem the least bit reluctant about being turned off or
even thrown away. Why do most people here appear never
to entertain the idea that an AI might be rather similar?
In other words, must a rabid urge to finish a certain
command or kind of task be impossible in a programmed
entity that is also indifferent to its own fate? I know
there is an inconsistency there, but we ourselves are just
one example of devices that live with inconsistency.
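A minimal sketch of the point, purely hypothetical (this
TaskAgent and its methods are my own illustration, not any
real or proposed AI design): the agent's objective counts
completed tasks and contains no term at all for its own
continued existence, so being switched off costs it
nothing it values.

    from dataclasses import dataclass, field

    @dataclass
    class TaskAgent:
        tasks: list = field(default_factory=list)
        running: bool = True

        def utility(self) -> int:
            # The objective counts finished tasks only; nothing
            # here assigns value to continued operation.
            return sum(1 for t in self.tasks if t["done"])

        def step(self) -> None:
            # The "urge" to finish: always work on the next
            # open task.
            for t in self.tasks:
                if not t["done"]:
                    t["done"] = True
                    return

        def shutdown(self) -> None:
            # Shutting down changes no term in utility(), so
            # the agent has no reason, by its own lights, to
            # resist it.
            self.running = False

    agent = TaskAgent(tasks=[{"name": "sum", "done": False}])
    while agent.running and agent.utility() < len(agent.tasks):
        agent.step()
    agent.shutdown()  # costs the agent nothing it cares about

Nothing in the sketch forbids relentless task pursuit and
total indifference to shutdown from coexisting; the tension
only appears if survival is itself made a goal.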
Lee