Re: When Subgoals Attack

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Dec 13 2000 - 12:22:39 MST


Durant Schoon wrote:
>
> Problem: A transhuman intelligence(*) will have a supergoal (or
> supergoals) and might very likely find it practical to
> spawn sophisticated processes that solve subgoals.
>
> So the problem is this: what would stop subgoals from
> overthrowing supergoals? How might this happen? The subgoal
> might determine that, to satisfy the supergoal, a coup is
> just the thing.

This is not the first time I've heard this possibility raised. My answer
is twofold: First, I've never heard a good explanation of why an
intelligent subprocess would decide to overthrow the superprocess.
Second, I've never heard a good explanation of why a transhuman would
decide to spawn intelligent subprocesses if it involved a major risk to
cognitive integrity.
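
To make the structural point concrete, here is a minimal sketch (in Python, purely illustrative; every class and name below is invented for this example, not anyone's actual architecture) of a goal hierarchy in which a subgoal's desirability is derived entirely from its parent supergoal. Under that arrangement a subprocess has no separate utility function of its own, so a "coup" is not merely forbidden, it is unmotivated: abandoning the supergoal scores zero by construction.

# Illustrative sketch only: subgoal value is derived, not independent.
class Goal:
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent              # None marks the supergoal
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def value(self, outcome):
        """A subgoal contributes value only insofar as its parent does.

        The supergoal defines value directly; every subgoal inherits its
        evaluation from the chain above it, so there is no separate
        criterion a subprocess could promote over the supergoal's.
        """
        if self.parent is None:
            return self.evaluate(outcome)   # supergoal: terminal criterion
        return self.parent.value(outcome)   # subgoal: purely instrumental

    def evaluate(self, outcome):
        # Placeholder for the supergoal's actual evaluation criterion.
        return outcome.get(self.description, 0.0)


# Usage: an outcome in which the subgoal "wins" but the supergoal is
# abandoned is worth nothing to the subgoal itself.
supergoal = Goal("preserve_cognitive_integrity")
subgoal = Goal("acquire_resources", parent=supergoal)

outcome_serves_supergoal = {"preserve_cognitive_integrity": 1.0}
outcome_coup = {"acquire_resources": 1.0}   # supergoal abandoned

print(subgoal.value(outcome_serves_supergoal))   # 1.0
print(subgoal.value(outcome_coup))               # 0.0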

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


