RE: Singularity Institute: Likely to win the race to build GAI?

From: H C (lphege@hotmail.com)
Date: Wed Feb 15 2006 - 22:32:33 MST


I can't really answer any of your questions, but the conclusion seems
pretty intuitive to me.

In any case where you create some kind of "intelligent system" implemented
as a computer program, with the endowed ability to write computer code, it
seems that by default you are going to go straight into a hard take-off,
and basically rip the Universe a new one in a trivial amount of time.
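
To make that feedback loop concrete, here is a crude toy model (purely
illustrative -- the update rule and the constants are arbitrary assumptions
on my part, not a claim about any real architecture). The assumption is
that each rewrite of the system's own code gets easier in proportion to its
current capability, so the gains compound:

    # Toy model of recursive self-improvement (illustrative only).
    # Arbitrary assumption: the gain from each self-rewrite scales
    # with the square of current capability (a superlinear rule).
    capability = 1.0
    efficiency = 0.1  # arbitrary fraction of capability turned into improvement

    for generation in range(50):
        # Each generation rewrites its own code; better code makes the
        # next rewrite more effective, so the gains compound.
        capability += efficiency * capability * capability
        print(f"generation {generation:2d}: capability {capability:.3g}")
        if capability > 1e12:
            print("take-off: capability has left the chart")
            break

Under the quadratic rule the model creeps along for about ten generations
and then explodes within five more, which is the "trivial amount of time"
intuition; with a merely linear rule you would get ordinary exponential
growth instead.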

When you rip the Universe a new one, if you don't rip it *just right*, you
are probably going to decimate any remnant of sentience, other than perhaps
your AGI.

I think some people on this list share the same intuitive grasp of the
idea. Maybe after I've educated myself a bit more (school is such a drag on
education), I can try to discuss this concept in more technical terms, to
see whether this intuition can actually be made rigorous.

In order to do that, I think it is necessary to talk about just what an
"intelligent system" is, in general.

But I suspect a more formal treatment would land everything pretty much
where the intuition already points.

-hegem0n

>From: "Peter Voss" <peter@optimal.org>
>Reply-To: sl4@sl4.org
>To: <sl4@sl4.org>
>Subject: RE: Singularity Institute: Likely to win the race to build GAI?
>Date: Wed, 15 Feb 2006 14:00:08 -0800
>
>I agree with Ben, and would like to mention an even more fundamental issue:
>
>Over years of discussion I have also not heard convincing arguments that
>the idea of a "verifiable FAI" makes sense, and/or is at all possible. For
>example, FAI inherently requires a dynamic goal structure (i.e. not
>pre-defined) operating in an unpredictable environment.
>
>I don't have time to enter into the debate (too busy actually building
>AGI), but just wanted to stress that not everyone here buys into some of
>these basic assumptions.
>
>Another assumption that I don't share is that "normal" AGI is likely to be
>"Unfriendly" - i.e. detrimental to us.
>
>Peter Voss
>a2i2
>http://adaptiveai.com/
>
>
>-----Original Message-----
>From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Ben
>Goertzel
>
>Eli> You can't "add Friendliness". It adds requirements like determinism
>and verifiability to the entire architecture. It rules out entire classes
>of popular AI techniques, like evolutionary programming,
>
>I have never seen you give a convincing argument in favor of this point,
>though I have heard you make it before.
>
>....
>


