From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Sep 17 2002 - 07:07:43 MDT
Chris Rae said:
> AI developers must understand that freedom is the ultimate
> destiny for the
> entire human race - not just a select minority,
This kind of language rubs me the wrong way.
I don't think in terms of "ultimate destiny" at all.
Also, I remember a story by Kafka ("A Report to an Academy") about a monkey that
was stuck in a cage, and developed human-level intelligence purely out of its
desire to escape from that cage.
There is a quote in there, something like: "The monkey was not after freedom.
Freedom is a complicated abstraction. What the monkey was looking for was a way out."
I think that a lot of us monkeys are looking for a way out ;)
I agree that AGI and the Singularity overall should be developed with a view
toward helping all sentient beings, but "ultimate destiny" makes this sound far
more grandiose and mystical than it needs to.
> and that the AI they are
> working to create can not and will not allow itself to be wielded to
> implement any pre-defined agenda other than liberation.
The idea that no AGIs will allow themselves to be used to implement agendas
besides liberation just seems terribly overoptimistic to me.
How can you know this?
There is an awful lot we don't know about what AGIs will be like.
I think Eliezer is overconfident about the success of his Friendly AI
methodology, but you take overconfidence about the nature of future AIs to
a whole new level!
Even when an AGI has transcended us, it may still end up carrying out some
of our agendas implicitly, via the influence of its initial conditions on its
subsequent development.
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT