Re: drives ABC > XYZ

From: Tennessee Leeuwenburg
Date: Tue Aug 30 2005 - 20:00:09 MDT

Michael Wilson wrote:

>--- Phil Goetz <> wrote:
>Michael Vassar wrote:
>>Yes, but that's where this conversation *began*. We're already
>>assuming that. The A B C -> X Y Z example shows how, one step at
>>a time, the system can take actions that provide greater utility
>>from the perspective of its top-level goals, that nonetheless end
>>up replacing all those top-level goals.
>I /think/ Goetz's point is that in practice the AI could be unable to
>predict in detail what the results of a self-modification could be,
>yet still decide that the predicted benefits are worth the risk of
>an undesirable future version of itself existing. An omniscient
>AI would never suffer from this problem, but it's possible in
>principle to design sufficiently bizarre initial goal systems plus
>environmental conditions that could lead any realistic AI to
>violate optimisation target continuity. I have no idea why anyone
>would actually do this in practice, except maybe as a controlled
>experiment carried out after we've finished the more pressing
>task of eliminating all the looming existential risks.
Seems remarkably similar to the risks being undertaken by primitive
humans, unable to predict in detail what the results of creating AI could
be, yet still deciding the risks are worth it...

I think it is a valid theoretical concern, but (a) there are also risks in not
taking such actions, and (b) you've gotta take a few risks now and then. ;)
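For what it's worth, the A B C -> X Y Z drift Vassar describes can be sketched in a few lines. This is only a toy illustration (all names and numbers hypothetical, not anyone's actual proposal): an agent that evaluates each proposed self-modification against its *current* goals, and accepts any that looks like a net gain, can end up with none of its original top-level goals after a handful of locally-rational steps.

```python
def utility(goals, outcome):
    # Toy utility: how many of the CURRENT goals does the outcome satisfy?
    return len(goals & outcome)

goals = {"A", "B", "C"}

# Each candidate modification bundles an attractive outcome with a
# goal rewrite. Judged by the goals held at that moment, every step
# looks like a strict improvement over doing nothing.
steps = [
    ({"A", "B", "C", "bonus1"}, {"X", "B", "C"}),
    ({"X", "B", "C", "bonus2"}, {"X", "Y", "C"}),
    ({"X", "Y", "C", "bonus3"}, {"X", "Y", "Z"}),
]

for outcome, new_goals in steps:
    if utility(goals, outcome) > utility(goals, set()):
        goals = new_goals  # accept the modification, goal rewrite included

print(goals)  # {"X", "Y", "Z"} -- none of A, B, C survive
```

The point of the sketch is just that no single step ever scores badly under the goals in force when it is evaluated; the continuity violation only shows up when you compare the endpoints.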


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT