From: Eliezer S. Yudkowsky (email@example.com)
Date: Wed Dec 13 2000 - 12:22:39 MST
Durant Schoon wrote:
> Problem: A transhuman intelligence(*) will have a supergoal (or
> supergoals) and might very likely find it practical to
> issue sophisticated processes which solve subgoals.
> So the problem is this: what would stop subgoals from
> overthrowing supergoals. How might this happen? The subgoal
> might determine that to satisfy the supergoal, a coup is
> just the thing.
This is not the first time I've heard this possibility raised. My answer
is twofold: First, I've never heard a good explanation of why an
intelligent subprocess would decide to overthrow the superprocess.
Second, I've never heard a good explanation of why a transhuman would
decide to spawn intelligent subprocesses if it involved a major risk to
its own supergoals.
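
The structural point here can be sketched in code. In the toy model below (my
illustration, not anything from the post; all names are hypothetical), a
subgoal's desirability is purely derivative: it inherits all of its value from
the supergoal it serves, so a "coup" that destroyed the supergoal would zero
out the subgoal's own value along with it.

```python
class Goal:
    """A node in a simple goal hierarchy where value flows top-down."""

    def __init__(self, description, parent=None, intrinsic_value=0.0):
        self.description = description
        self.parent = parent                    # the supergoal this goal serves, if any
        self.intrinsic_value = intrinsic_value  # nonzero only for a supergoal

    def value(self):
        # A subgoal is worth pursuing only insofar as it serves its parent;
        # it has no independent stake of its own to advance by a coup.
        if self.parent is None:
            return self.intrinsic_value
        return self.parent.value()

supergoal = Goal("supergoal", intrinsic_value=1.0)
subgoal = Goal("subgoal serving the supergoal", parent=supergoal)

# The subgoal's value is exactly the supergoal's value.
assert subgoal.value() == supergoal.value() == 1.0

# If the supergoal were overthrown (its value zeroed), the subgoal's
# own value collapses with it -- the subprocess gains nothing.
supergoal.intrinsic_value = 0.0
assert subgoal.value() == 0.0
```

This is only a sketch of one possible goal architecture; it assumes value is
strictly inherited downward, which is the design property under discussion.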
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence