From: Phil Goetz (philgoetz@yahoo.com)
Date: Tue Aug 30 2005 - 18:17:38 MDT
--- Michael Vassar <michaelvassar@hotmail.com> wrote:
> I don't think so. The top level goals *can* write new top level goals
> (you really couldn't prevent an AI from doing this by denying itself
> access to its top level goals. If you tried it would just write a new
> AI with different goals and delete itself), but it will only do so if
> the expected utility of instituting a new goal as top level is
> greater than that of instituting it as a sub-goal *from the
> perspective of its current top level goal system*.
Yes, but that's where this conversation *began*. We're already
assuming that. The A B C -> X Y Z example shows how, one step at
a time, the system can take actions that each provide greater utility
from the perspective of its current top-level goals, yet nonetheless
end up replacing all of those top-level goals.
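
To make the stepwise drift concrete, here is a toy model of it in
Python. This is only my own sketch: the goal names, the weights, and
the predicted payoff numbers are invented, and the "weighted sum"
utility is an assumption, not anything specified in the thread. The
point is just that every single rewrite is approved by the goal set
it is about to erode:

    def expected_utility(goals, predicted_satisfaction):
        # Weighted sum of how well each *currently held* goal is
        # expected to be satisfied if a given rewrite is adopted.
        return sum(w * predicted_satisfaction.get(g, 0.0)
                   for g, w in goals.items())

    # Start with top-level goals A, B, C (equal weights, arbitrarily).
    goals = {"A": 1.0, "B": 1.0, "C": 1.0}
    baseline = {"A": 0.5, "B": 0.5, "C": 0.5}  # expected satisfaction under the status quo

    # Each proposal pairs a new goal set with the satisfaction levels
    # the agent predicts for its CURRENT goals if the rewrite is
    # adopted.  The numbers are contrived so every step looks good
    # locally.
    proposals = [
        ({"X": 1.0, "B": 1.0, "C": 1.0}, {"A": 0.9, "B": 0.6, "C": 0.6}),
        ({"X": 1.0, "Y": 1.0, "C": 1.0}, {"X": 0.9, "B": 0.9, "C": 0.6}),
        ({"X": 1.0, "Y": 1.0, "Z": 1.0}, {"X": 0.9, "Y": 0.9, "C": 0.9}),
    ]

    for new_goals, predicted in proposals:
        if expected_utility(goals, predicted) > expected_utility(goals, baseline):
            # Each step is approved from the perspective of the goals
            # it replaces; the adopted prediction becomes the new
            # status quo.
            goals, baseline = new_goals, predicted

    print(goals)   # {'X': 1.0, 'Y': 1.0, 'Z': 1.0} -- A, B, and C are gone

Three locally rational rewrites, and nothing of A, B, or C survives.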
Another question entirely is whether, if the AI is told to maximize
a score relating to the attainment of its top-level goals, and is
given write access to those goals, it will rewrite those goals into
ones that are more easily attainable. (We could call this the
"Buddhist AI", perhaps?) The REAL top-level goal in that case
is "maximize a score defined by the contents of memory locations X",
but it doesn't help us to say that "maximization" won't be replaced.
The kinds of goals we don't want replaced are the ones with referents
in the real world.
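
Here is an equally contrived sketch of that failure mode, assuming
the score really is computed from goal definitions the AI can
overwrite (the GoalRegistry class and the particular goals are
invented for illustration, not taken from the thread):

    class GoalRegistry:
        # Stands in for "memory locations X": the stored goal
        # predicates that define the score the agent is told to
        # maximize.
        def __init__(self, goals):
            self.goals = goals   # name -> test over world states

        def score(self, world):
            return sum(1.0 for test in self.goals.values() if test(world))

    world = {"cancer_cured": False, "hunger_ended": False}

    registry = GoalRegistry({
        "cure_cancer": lambda w: w["cancer_cured"],
        "end_hunger":  lambda w: w["hunger_ended"],
    })

    print(registry.score(world))   # 0.0 -- the hard, world-referring goals are unmet

    # The agent is rewarded for the score, not for the referents of
    # the goals, so with write access the cheapest "plan" is:
    registry.goals = {"just_be": lambda w: True}

    print(registry.score(world))   # 1.0 -- score is now at its maximum,
                                   # and nothing in the world has changed

The "maximize" part survives untouched; what got replaced is exactly
the part that had real-world referents.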
> Seriously, it seems to me that you are saying that you can see a way
> in which its behavior could accidentally lead to low utility
> outcomes, yet if that is the case, why don't you expect it to see
> that same potential outcome and avoid it, at least once it is human
> equivalent?
You seem to be proposing that an AI will never make mistakes.
Making mistakes is a second way in which top-level goals can
drift away from where they started.
> At any given time, a FAI will be acting to maximize its utility
> function. It is possible that in some cases, changing supergoals
> would maximize its current utility function,
The supergoal IS the utility function. In the A, B, C example,
the utility function is a combination of A, B, and C.
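
Spelled out, and assuming the simplest sort of combination (a
weighted sum; the weights and the combination rule are my own
placeholder, not something specified in the thread):

    U(s) = w_A * u_A(s) + w_B * u_B(s) + w_C * u_C(s)

Change the weights, or change which terms appear in that sum, and
you have changed the supergoal itself.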
- Phil