RE: Shock Level 5 (SL5) - 'The Theory Of Everything'

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Aug 18 2005 - 04:52:42 MDT


> > So, Godelian arguments are obviously no use for
> > FAI...
> >
> > -- Ben G
>
> I disagree Ben. I think Godel is the key to this
> whole thing! As you know, I'm desperate to try to
> prove Universal Morality and I've been lunging and
> stabbing wildly at anything that might save the day
> for altruism. Godel is my last straw! ;)
>
> For how can an AI self-improve unless it can
> *understand its own system*?

But self-understanding, for this purpose, doesn't need
to be complete and absolute...

> An AI which did not understand its own system could
> not be sure that any modification it made to itself
> would be an improvement.

But even without complete self-understanding, an AI
could prove that a certain self-modification would,
under certain environmental conditions, be an
improvement with probability greater than, say, 99.99%.
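To make that concrete, here is a minimal sketch (not from the original exchange) of one way such a probabilistic guarantee could be obtained without a full self-proof: benchmark the candidate modification empirically and apply a Hoeffding-style one-sided confidence bound to its observed success rate. The benchmark, the 0.97 success rate, and all names are invented for illustration.

```python
import math
import random

def bernoulli_lower_bound(successes, trials, delta=1e-4):
    """One-sided Hoeffding lower bound on a Bernoulli mean:
    with probability >= 1 - delta, the true success rate
    exceeds the returned value."""
    p_hat = successes / trials
    return p_hat - math.sqrt(math.log(1 / delta) / (2 * trials))

random.seed(0)

# Hypothetical benchmark: the candidate self-modification
# succeeds on ~97% of trials (a made-up figure).
trials = 200_000
wins = sum(random.random() < 0.97 for _ in range(trials))

# With delta = 1e-4, this bound holds with probability > 99.99%,
# even though the system never proves anything about its own
# internals -- only about its measured behavior.
lb = bernoulli_lower_bound(wins, trials)
print(f"with prob > 99.99%, true success rate > {lb:.4f}")
```

The point of the sketch is that the guarantee is statistical and conditional on the benchmark's environment, which is exactly the hedge in the paragraph above: no Godelian completeness of self-understanding is required.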

> Perhaps only a friendly AI could have complete
> understanding of its own system?

As you yourself note, this seems to be an example of
non-rationally-justifiable "grasping at straws" ;-)

-- Ben G



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:01 MST