From: Patrick McCuller (patrick@kia.net)
Date: Wed Sep 06 2000 - 12:20:46 MDT
> >
> > Thus the path of AI first is the path of least risk.
>
> This doesn't deal with the rest of my question. Since a Singularity-class
> AI is utterly unpredictable, much more so than human beings, and is much
> more powerful than mere humans with things like nanotech, then exactly
> why is the AI less dangerous? Your argument above seems to hinge on the
> assumption that the AI will be a Sysop that will rule over everything
> and somehow keep us from harm. That is a quite questionable assumption.
>
> - samantha
>
Remember that AI is also a tool. We'll be better equipped to know whether
to proceed when we have an AI approaching transcendence. Would you be
happier with this scenario:
1. A superhuman AI achieves transcendence.
or with this one:
1. A superhuman AI (but not a singularity) designs a means of uploading
   humans.
2. One or more humans are uploaded.
3. An uploaded human achieves transcendence.
?
Patrick
PS A singularity is not truly unpredictable.