From: Perry E. Metzger (perry@piermont.com)
Date: Wed Dec 31 2003 - 12:21:45 MST
Tommy McCabe <rocketjet314@yahoo.com> writes:
> True, but no disproof exists.
Operating on the assumption that something which may or may not
be possible will happen seems imprudent.
> If anyone thinks they
> have one, I would be very interested. And there's
> currently no good reason I can see why Friendly AI
> shouldn't be possible.
I can -- or at least, I can see why it wouldn't be stable. There are
several problems here: there is no absolute morality (and thus no way
to universally determine "the good"); it is not obvious that one
could construct something far more intelligent than yourself and
still manage to constrain its behavior effectively; it is not clear
that such a construct could battle it out effectively against
constructs from societies that do not build Friendly AIs (or indeed
that the winners in the universe won't be the societies that produce
the meanest, baddest-assed intelligences rather than the friendliest
-- see evolution on Earth); etc.
Anyway, I find it interesting to speculate on possible constructs like
The Friendly AI, but not safe to assume that they're going to be in
one's future. The prudent transhumanist considers survival in a wide
variety of scenarios.
Perry