RE: singularity arrival estimate... idiocy... and human plankton

From: ben goertzel (ben@goertzel.org)
Date: Fri May 17 2002 - 14:26:39 MDT


***
> Compared to Eliezer, I do sort of doubt the long-term importance of
> keeping humans around.

Humans? Or minds that started out as human?
***

I tend to doubt whether, in the long run, it will make any difference at
all whether a mind started out as human or not....

***
> Yeah, Friendly AI is *extremely* important to me, as a
> human being, in the "short run" (i.e. the next centuries or maybe
> millennia).

I've said it before, and I'll say it again: I don't think you can build a
Friendly AI if you conceive of this as exploiting the AI for your own
selfish purposes. I designed the Friendly AI semantics by thinking in terms
of sharing altruism. I don't think you can create a workable method by
thinking in terms of brainwashing.
***

Well, the ideas of "death" and "sentient being" are human concepts whose
limitations will be very apparent to superhuman intelligences.

Of course, by talking about Friendliness we are talking about trying to
impose a value system defined in terms of human concepts, on transhuman
entities that may have entirely different concept systems and entirely
different "natural" value systems.

This is intrinsically "self-centered" in a broad sense: it assumes that our
human concepts and values are very broadly applicable, when perhaps they
cannot reasonably be considered so...

***
> But will there still be human plankton a few thousand years down the
> line?

I would tend to hope not.

***

Maybe the only ones left will be a crew of six Plankton-Eliezers, locked in
a replica of Disneyland full of 10,000 android replicas of Jerry Falwell,
neurally modified by Uploaded-Transhuman-Ben to not desire uploading or
other technological improvements, and kept around as a tribute to
Human-Ben's long-disappeared perverse sense of humor...

ben


