Proving the Impossibility of Stable Goal Systems

From: Peter Voss (peter@optimal.org)
Date: Sun Mar 05 2006 - 15:37:14 MST


Eli,

 

Have you seriously considered putting focused effort into proving that
practical self-modifying systems can *not* have predictably stable goal
systems?

 

I don't recall any specific discussion on that point.

 

I mention 'practical' here because one must then assume live interaction
with the world, finite processing power, and a number of other constraints.

 

I strongly suspect that such a proof would be relatively simple. (Obviously,
at this stage you don't agree with this sentiment.)

 

Naturally, the implications for SIAI (and the FAI/AGI community in general)
would be substantial.

 

Peter

 

 

PS. Off the top of my head, I would imagine that the following
considerations might be included:

 

- Any practical high-level AGI has to use its knowledge to interpret (and
question?) its given goals

 

- Such a system would gain improved knowledge from its interactions with the
real world. The content of this knowledge, and the conclusions the AGI draws
from it, are not predictable in advance.

 

- By the nature of its source of information, much of that knowledge would be
based on induction and/or statistics, and would therefore be inherently
fallible (see the toy sketch below).
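 

To make that last point concrete, here is a minimal Python sketch. It is my
own hypothetical illustration, not anyone's proposed architecture: a goal
whose wording never changes is evaluated through an inductively learned world
model (a simple running frequency estimate), so the action the goal endorses
can flip unpredictably as noisy observations arrive.

import random

random.seed(0)

class WorldModel:
    """Tracks a single learned belief: the probability that action 'A' succeeds."""
    def __init__(self):
        self.successes = 0
        self.trials = 0

    def update(self, observation: bool) -> None:
        # Inductive update: a running frequency estimate, hence fallible.
        self.trials += 1
        self.successes += int(observation)

    def p_success(self) -> float:
        # Prior of 0.5 before any evidence has been seen.
        return 0.5 if self.trials == 0 else self.successes / self.trials


def goal_preferred_action(model: WorldModel) -> str:
    # The goal text is fixed ("pick the action most likely to succeed"),
    # but its verdict depends entirely on the model's current estimate.
    return "A" if model.p_success() >= 0.5 else "B"


model = WorldModel()
previous = goal_preferred_action(model)
# Feed in a noisy, unpredictable stream of observations about action 'A'.
for step in range(20):
    model.update(random.random() < 0.45)  # true success rate unknown to the agent
    current = goal_preferred_action(model)
    if current != previous:
        print(f"step {step}: preferred action flipped to {current} "
              f"(estimated p = {model.p_success():.2f})")
        previous = current

The point is only that the goal's behavioural content is hostage to the
model's statistics; an actual proof would of course need far more than a toy
example like this.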

 


