Re: ITSSIM (was Some new ideas on Friendly AI)

From: Phil Goetz (philgoetz@yahoo.com)
Date: Wed Feb 23 2005 - 10:03:42 MST


--- Tennessee Leeuwenburg <tennessee@tennessee.id.au>
wrote:

> Ben,
>
> I read your idea on actions based on proven
> theories, subject to a
> blacklist against which unsafe options are blocked.
> You seemed to be
> worried that such a system would be susceptible to
> improving itself into a local minimum - that is,
> that it would hill-climb to a potentially
> limiting endpoint.
>
> ...
>
> Let Mag(OPTIONS) represent the number of elements
> contained in the set
> OPTIONS.
>
> Mag(OPTIONS) is proportional to IQ(AGI).
>
> Here's my argument :
>
> X = Max(OPTIONS)
> If X != now and V(X) > V(now), then
> IQ(AGI, X) > IQ(AGI, now), which implies
> Mag(OPTIONS, AGI, X) > Mag(OPTIONS, AGI, now).

This doesn't show that Mag(OPTIONS) is proportional to
IQ(AGI). At most it shows that Mag(OPTIONS) increases
monotonically with IQ(AGI), which is a weaker claim.
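To see why monotone increase is weaker than proportionality, here is a toy
counterexample (my own illustration, not anything from Tennessee's argument):
a hypothetical option-counting function that always grows with IQ, but only
logarithmically, so the ratio Mag/IQ is not constant.

```python
import math

def mag_options(iq):
    # Hypothetical option count: strictly increasing in IQ,
    # but growing only logarithmically.
    return int(math.log2(iq)) + 1

iqs = [2, 4, 8, 16, 32]
mags = [mag_options(iq) for iq in iqs]

# Monotone: higher IQ never means fewer options...
assert all(m2 >= m1 for m1, m2 in zip(mags, mags[1:]))

# ...yet not proportional: the ratio Mag/IQ keeps shrinking.
ratios = [m / iq for m, iq in zip(mags, iqs)]
assert ratios[0] > ratios[-1]
```

So the quoted inference establishes only "more IQ, at least as many
options", not the proportionality claim.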
>
> Corollary: Regardless of the speed at which the
> intelligence of the AGI grows, the options
> available to the AGI increase with the
> intelligence of the AGI. Incremental increases to
> IQ do not result in a local minimum, because the
> horizon of the AGI is pushed wider with each
> improvement.

Insert here what Ben said again.
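The worry the corollary is trying to dismiss can be made concrete. A minimal
sketch (my own hypothetical value landscape, not Ben's system): a greedy
self-improver that only takes strictly better neighboring options can stall
at a local peak even though a higher peak exists, and widening the option
horizon one step at a time doesn't by itself escape it.

```python
def value(x):
    # Two-peaked landscape: local peak at x=3 (value 3),
    # global peak at x=8 (value 8).
    return {0: 0, 1: 1, 2: 2, 3: 3, 4: 1, 5: 2, 6: 4, 7: 6, 8: 8}[x]

def hill_climb(x):
    # Greedy improvement: move to the best adjacent state,
    # stop when no neighbor is strictly better.
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n <= 8]
        best = max(neighbors, key=value)
        if value(best) <= value(x):
            return x  # no strictly better neighbor: stuck
        x = best

assert hill_climb(0) == 3   # stalls at the local peak...
assert value(8) > value(3)  # ...though a higher peak exists
```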

- Phil




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT