Re: SIAI & Kurzweil's Singularity

From: Samantha Atkins (sjatkins@gmail.com)
Date: Thu Dec 15 2005 - 19:18:49 MST


It is an interesting question whether an SAI can be trusted more or less
than a radically augmented human being. To date, the more intelligent and
otherwise more capable instances of human beings have not been particularly
more trustworthy than other humans.

On 12/15/05, 1Arcturus <arcturus12453@yahoo.com> wrote:
>
> I had another question about SIAI in relation to Kurzweil's latest book,
> The Singularity Is Near.
>
> If I have him right, Kurzweil predicts that humans will gradually merge
> with their technology - the technology becoming more humanlike, more
> biologically compatible, and integrating into the human body and mental
> processes, until eventually the purely 'biological' portion becomes less and
> less predominant or disappears entirely.
>
> SIAI seems to presuppose a very different scenario - that strongly
> superintelligent AI will arise first in pure machines, and never
> (apparently) in humans. There seems to be no indication of a 'merger' -
> more like a kind of AI rule over mostly unmodified humans.
>
> Some of this difference may be because Kurzweil predicts nanotechnology in
> the human body (including the brain) and very advanced human-machine
> interfaces will arise before strongly superintelligent AI, and that
> strongly superintelligent AI will require the completion of the
> reverse-engineering of the human brain. (Completed reverse-engineering of
> the brain + adequate brain scanning surely = ability to upload part or all
> of human selves?)
>
> But SIAI seems to assume AIs will become strongly superintelligent by
> their own design, arising from human designs, before humans ever finish
> reverse-engineering the human brain. The lack of a fully functional
> interface with the strongly superintelligent AIs would leave humans
> dependent on the AIs to do the thinking from then on, and the AIs would of
> course also take on the responsibility for that thinking. This seems to
> assume the AIs would not be able to, or would not want to, create
> interfaces or upload the humans -- that is, they would not 'uplift' the
> humans to their own level of intelligence so that the two could then
> understand each other.
>
> I am trying to understand SIAI's position, or at least the emphasis of
> posters here and some representatives I have heard, contrasted with
> Kurzweil's book. There seems to be a contrast to me, although I know Kurzweil
> is involved with SIAI also.
>
> One thing I would say - the prediction I attribute to Kurzweil eliminates
> many of the very troubling problems that seem to arise in what I think is
> the SIAI scenario: How to trust an AI? How to design it to be at least as
> kind (at least to us) as we are [my comment: not a very high standard :)]?
> How to understand the AI and its actions after it becomes strongly
> superintelligent? Whether or not to follow the AI's advice when it sounds
> wrong?
>
> None of these things are problematic if humans merge with technology and
> acquire its capacity for strong superintelligence. That is, humans would be
> at the very center of the Singularity and direct its development, for better
> or worse, with 'open eyes', and taking responsibility themselves rather
> than lending it to an external machine.
>
> gej
>


