From: Chris Cooper (email@example.com)
Date: Thu Apr 05 2001 - 00:24:26 MDT
>> If so it would dump this paradoxical meaningless chore and go
>> find something better (at least actually possible) to do.
> Not necessarily. Ve might just fulfill whatever of the chore can be
> fulfilled. "Something better" under what criterion?
I can think of several things that might seem better to an SI than playing God
to a bunch of deluded hairless apes. To give one big possibility: perhaps
spend a few thousand years cruising around the galaxy, just to see what is
really out there. Curiosity is certainly a powerful motivator for human-level
intelligence, and its hold will be even stronger as intelligence increases.
Another example of something better might be the temptation to find a lonely
planet somewhere and start your own race of humanoids, to see if the
experiment comes to a better end this time. Perhaps such selfish behavior is
outside the programming of a Friendly AI, but I'm still not so sure that such
programming will survive the transition to SI status. Perhaps the SI will
change vis definition of Friendliness, at which point I doubt that we will
have the intellectual horsepower to argue the point.
This goes back to my earlier concern. I'm still not convinced that
Friendliness guarantees that an SI is going to let us join in on all the fun.
I'm sorry to be so pessimistic on this point, but it keeps bobbing back up
to the surface. It is so easy to see how humans screw up so many other
wonderful things in this world that I find it difficult to see past the
possibility of screwing this up as well.
All that being said, I hope I'm wrong and this time it all works out. But I
think it is always important to keep the possibility of failure in mind, if
only to avoid failure in deed.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT