Re: AGI Prototyping Project

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Mon Feb 21 2005 - 08:04:25 MST


> * AGI should be forced, initially, to reproduce rather than self modify
> (don't shoot me for this opinion, please just argue okay?)

What do you mean by reproduce?

* If you mean creating a perfect clone, then that's pointless.
* If you mean random mutation and crossover, then that's unpredictable and
  could do bad things to the AGI's goal system, so the AGI might not want to
  reproduce (of course, this would select for AGIs which do want to
  reproduce). There is a toy simulation of that selection effect below.
* If you mean the AGI must build a new AGI to succeed it, then that's the
  same thing as self-modification.
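
To make the selection-pressure point concrete, here is a toy simulation
(my own illustration; every name and number in it is an assumption, not
anything from the thread). Agents whose goal systems mutate at reproduction
time drift toward wanting to reproduce, whatever the designers intended,
simply because only the ones that reproduce leave descendants:

    import random

    # Toy model: each agent's goal system is summarized by one number,
    # its "desire to reproduce" (probability of copying itself each
    # generation, with mutation of that desire in the copy).
    def run(generations=50, pop_size=100, mutation=0.05):
        population = [0.5] * pop_size  # start with lukewarm reproducers
        for _ in range(generations):
            offspring = []
            for desire in population:
                if random.random() < desire:  # reproduces only if it "wants" to
                    child = min(1.0, max(0.0, desire + random.gauss(0, mutation)))
                    offspring.append(child)
            # keep the population size constant by sampling survivors
            population = random.choices(offspring or population, k=pop_size)
        return sum(population) / pop_size

    if __name__ == "__main__":
        print("mean desire to reproduce after selection:", run())

Running it, the population mean climbs well above the initial 0.5: the
mutation-and-crossover route selects for AGIs that do want to reproduce,
regardless of what their goal systems started out valuing.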

> * AGI will trigger a great leap forward, and humans will become
> redundant. Intelligence is never the servant of goals; it is the master.

Without an existing supergoal, by what measure do you compare potential
goals, and how is this measure different from a supergoal?
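
A minimal sketch of that point (the names are mine and purely illustrative):
any procedure for choosing among candidate goals has to take some ranking
function as input, and that ranking function is doing exactly the job of a
supergoal.

    from typing import Callable, Iterable

    Goal = str

    def choose_goal(candidates: Iterable[Goal],
                    rank: Callable[[Goal], float]) -> Goal:
        # 'rank' is the "measure" in question; the chosen goal is whichever
        # one 'rank' scores highest, so 'rank' is the effective supergoal.
        return max(candidates, key=rank)

    if __name__ == "__main__":
        prefers_knowledge = lambda g: {"learn": 2.0, "rest": 1.0}.get(g, 0.0)
        print(choose_goal(["learn", "rest"], prefers_knowledge))  # -> "learn"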

> * In AGI, psychological instability will be the biggest problem, because
> it is a contradiction to say that any system can be complex enough to
> know itself.

To know oneself, it is not necessary to contain oneself as a proper subset;
it is enough to have a map of the high-level organization and be able to
focus attention on one detail at a time.
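
An illustrative sketch of that idea (mine, not from the post): the system
keeps only a coarse map of its own parts, and pulls up the full detail of one
part at a time when attention turns to it, so it never needs to contain a
complete copy of itself.

    import inspect

    class SelfModel:
        def __init__(self, modules):
            self.modules = modules  # name -> callable
            # coarse map: one summary line per module, not the full code
            self.map = {name: (fn.__doc__ or "").strip()
                        for name, fn in modules.items()}

        def focus(self, name):
            # detailed self-inspection of a single part, fetched on demand
            return inspect.getsource(self.modules[name])

    def plan(goal):
        """Turn a goal into a list of actions."""
        return [goal]

    def act(step):
        """Execute one action."""
        print("doing:", step)

    if __name__ == "__main__":
        me = SelfModel({"plan": plan, "act": act})
        print(me.map)            # the high-level organization
        print(me.focus("plan"))  # one detail, examined when attended to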


