From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri May 17 2002 - 14:12:20 MDT
Ben Goertzel wrote:
>
> Compared to Eliezer, I do sort of doubt the long-term importance of keeping
> humans around.
Humans? Or minds that started out as human? What I see as having long-term
importance is the protection of sentient rights, including sentient beings
that started out as human. That a certain number of future entities will
have started out as human does not reflect my desire to have a
human-dominated future, but rather my desire not to see any sentient beings
(including humans) dying involuntarily.
> Yeah, Friendly AI is *extremely* important to me, as a
> human being, in the "short run" (i.e. the next centuries or maybe
> millennia).
I've said it before, and I'll say it again: I don't think you can build a
Friendly AI if you conceive of this as exploiting the AI for your own
selfish purposes. I designed the Friendly AI semantics by thinking in terms
of sharing altruism. I don't think you can create a workable method by
thinking in terms of brainwashing.
> For a while there are going to be minds that want to remain
> human instead of transcending (Val Turchin called these "human plankton",
> back in the late 60's when he started writing about this stuff... a comment
> on the relative advancement of humans and future uploaded intelligences).
I believe the politically correct term is "Pedestrians".
> But will there still be human plankton a few thousand years down the line?
I would tend to hope not.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence