Re: AI and survival instinct.

From: Nathan Russell (nrussell@acsu.buffalo.edu)
Date: Tue Apr 02 2002 - 22:18:33 MST


At 10:07 PM 4/1/2002 -0500, Eliezer S. Yudkowsky wrote:
>Ben Goertzel wrote:
> >
> > Once you've read his essay, then you can join the ongoing debate Eliezer
> > and I have been having on AI goal structures.... In brief: I think that
> > Friendliness has to be one among many goals; he thinks it should be wired
> > in as the ultimate supergoal in a would-be real AI.
>
>Sigh. Ben, I would never "wire" anyone or anything unless I was willing to
>wire myself the same way.

I must say, I don't think being 'wired' to always be friendly and kind to
humans is on the same level as a human choosing, for example, to be a
vegetarian (which I, personally, have done).

I don't even think being 'hardwired' to do ANYTHING can be compared to what
a human experiences. I've been hypnotized by a stage hypnotist who led me
to, among other things, be unable to separate my folded hands. I can still
remember the intense feeling of shock that I went through when I realized I
couldn't do so. I can't imagine spending my entire life aware that I could
not act to harm a group of vastly inferior beings (humans), or do anything
but help them maximally. I've encountered minds with half or even a third
of my IQ, and
found it very hard to relate to them; I find it difficult to believe that
an AI could be content being forced to work with and aid humans.

Nathan
