Re: Safety of brain-like AGIs

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Mar 02 2007 - 06:45:13 MST


Shane Legg wrote:
> On 3/2/07, Ben Goertzel <ben@goertzel.org> wrote:
>
> Well, Shane, this list has a diverse membership, including some of
> us who
> are working on concrete AGI projects ;-)
>
>
> A lack of concrete AGI projects is not what worries me; it's the lack of
> concrete plans on how to keep these safe that worries me.

Shane, do you have any concrete plans in this regard?

I divide AGI safety issues into two categories:

1) AGI-internal issues [the system rewriting its goal system in a nasty
way, etc.]

2) societal and pragmatic issues [bad guys stealing your code, the gov't
outlawing your AI, AGI causing the downfall of society and mass
hara-kiri, etc.]

I believe that within the Novamente project we have crafted an effective
strategy to cope with the societal and pragmatic issues, though I can't
discuss it publicly in detail because doing so would diminish the
effectiveness of the strategy.

Regarding 1, we seem to have a solid intuitive grasp of the issue in the
context of the Novamente design, though not a rigorous proof. We also
have ways to test our grasp of the issue via experiments with
toddler-level AGIs. But this stuff can't really be discussed except in
the context of the non-public details of the NM design.

Basically, the aspects of NM I discuss on email lists are ones I am
comfortable sharing with outsiders, and plans for maximizing safety are
not really in that category, either for pragmatic reasons or
proprietary-technology reasons.

Perhaps you are in the same position? Do you have detailed ideas about
AGI safety in the context of your own project, which you are not willing
to share with this list for similar reasons?

-- Ben G

>
> A massive legion is being assembled at the gate, and the best response
> we have come up with is an all-star debate team.
>
> Shane


