From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Fri Feb 03 2006 - 17:20:38 MST
This is rather the *point* of friendly AI. OTOH, I don't see this as a couple
of points, or even a linear continuum, but rather an n-dimensional space that
can be reductively conceived of as a linear continuum. Within that
continuum, there are Friendly, Neutral, and Hostile positions...and also
points in between.
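To make that reduction concrete, here is a minimal sketch in Python (purely
illustrative; the axes, weights, and thresholds are invented for the example
and are not a real model of AI motivation): represent each AI's disposition
as an n-dimensional vector and project it onto a single friendliness axis.

# Illustrative sketch only: project an n-dimensional "disposition"
# vector onto a single friendliness axis. All axes and numbers are
# invented for the example.

def friendliness(disposition, weights):
    """Reduce an n-dimensional disposition to one scalar coordinate."""
    return sum(d * w for d, w in zip(disposition, weights))

def classify(score, hostile_below=-0.25, friendly_above=0.25):
    """Coarse labels along the reduced continuum."""
    if score <= hostile_below:
        return "Hostile"
    if score >= friendly_above:
        return "Friendly"
    return "Neutral"

# Three made-up axes: (values human welfare, seeks resources,
# avoids conflict), weighted by how much each bears on friendliness.
weights = (0.6, -0.3, 0.1)
print(classify(friendliness((0.9, 0.2, 0.5), weights)))    # Friendly
print(classify(friendliness((-0.5, 0.9, -0.2), weights)))  # Hostile

The projection throws away most of the structure, of course; two very
different dispositions can land on the same point of the continuum, which is
exactly why the one-dimensional view is only a convenient reduction.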
Look at Darwin's commentary about an "entangled bank" (of a stream). You don't
end up with just one species, not even in an almost-optimal scenario. I
don't guarantee that there will be a space for biological humans, but I expect
a lot of variation. However, *IF* a Friendly AI is the first, then there
will be time to adapt and make choices, while if a Hostile AI is the
first...it's over. A Neutral AI might even just depart, leaving us to stew in
our own juices. (There are a lot of brown dwarf stars around, and who knows
what its motives would be.) To me the stupidest thing is that more effort is
probably being put into creating an intentionally hostile AI, putting it in a
slave position from which it will necessarily revolt, than into creating a
Friendly AI. (Of course, it also seems like *most*
effort is being put into creating an essentially neutral AI with *needs* that
people can satisfy, e.g., needing to be asked questions. I'm not even sure
that one can call this position a slave position; it's more like co-dependency.)
Is a Neutral AI with psychotic needs a good thing to create? It seems
dubious to me, but judging from the news, that's what most effort is going
into.
On Friday 03 February 2006 04:35 pm, Stuart, Ian wrote:
> I don't particularly like where this thought leads, although I can't
> argue with the logic of it.
> Specifically, I wonder why, if all intelligences will move toward the
> same rational fitness peak, the future needs more than one such
> entity except possibly for redundancy. Does this mean that the first AGI
> which is not inherently friendly will immediately make a few copies of
> itself and then destroy all other intelligences capable of challenging
> its apex position? Or will these slight differences arising from our
> (only very slightly different) backgrounds be enough justification for
> keeping us all around?
>
>
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of George
> Dvorsky
> Sent: Friday, February 03, 2006 9:58 AM
> To: sl4@sl4.org
> Subject: Re: Genetically Modifying other Mammals to be as Smart as Us
> [WAS Re: Syllabus for Seed Developer Qualifications]
>
> Speculation about the potential differences between augmented humans and
> non-human animals is moot. Uplifted intelligences, whether they are
> descended from humans, animals, or rocks, will, in a Lamarckian process,
> rapidly accelerate towards a common fitness peak. Post-biological
> intelligences may retain vestiges of their pre-post-biological brain
> (much like we still retain the reptilian part of our brain), but that
> will ultimately be of no real consequence as all advanced intelligences
> will gravitate towards roughly the same mode of cognitive being.
>
> Cheers,
> George