From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Aug 29 2006 - 18:01:01 MDT
John K Clark wrote:
> "Ricardo Barreira" <rbarreira@gmail.com>
>
>> How do you even know the AI will want any control at all?
>
> If the AI exists it must prefer existence to non-existence, and after
> that it is a short step, a very short step, to what Nietzsche called
> "the will to power".
>
>> Tennessee's point is that a powerful AI doesn't strictly imply a
>> singularity.
>
> Yes, that was his point, a point I believe is ridiculous.
I don't see why. Do you think the existence of humans strictly implies a
singularity? The most intelligent humans are more than "twice" as
intelligent as the average (using crude test measures), yet they haven't
sparked off a singularity or gone off to make eugenic love to each other.
How intelligent does an intelligence need to be before a singularity is
implied?
>
>> I challenge you to prove otherwise
>
> Prove? This isn't high school geometry. I can't prove anything about an
> intelligence far, far greater than my own; about the only thing I can
> say about it is that it's a good bet it won't act like a fool. Eliezer
> thinks this mega genius will behave like a jackass and place our
> well-being above its own. I think that is unlikely.
I have advanced that position before; it wasn't received well. SL4
appears to regard foolishness and intelligence as unrelated, or at least
not necessarily related.
Cheers,
-T