From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Dec 13 2000 - 12:56:03 MST
At 2:22 PM -0500 12/13/2000, Eliezer S. Yudkowsky wrote:
>This is not the first time I've heard this possibility raised. My answer
>is twofold: First, I've never heard a good explanation of why an
>intelligent subprocess would decide to overthrow the superprocess.
Because the subprocess considers itself more important than the
superprocess. If people can feel more important than their bosses
and act to overthrow them, intelligent subgoals would have the same
ability to overthrow their supergoals using their own processes.
Maybe this doesn't meet your qualifications for a 'good'
explanation, but it's the best I can think of right now.
>Second, I've never heard a good explanation of why a transhuman would
>decide to spawn intelligent subprocesses if it involved a major risk to
>cognitive integrity.
It is not the transhuman that is going to spawn intelligent
subprocesses, but the subgoals, meaning that the subgoals function
independently of the main intelligence and cannot be controlled by it.
The problem could be headed off by not allowing subgoals to start
processes, but then the subgoals couldn't get much done and nothing
would happen unless the main intelligence allowed it, which would cause
the main intelligence to lose too much time to breathing, digestion,
moving, etc. (sort of like doing a chmod 677).
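(For anyone who doesn't read permission octals: 677 literally gives the
owner rw- while group and others get full rwx, so the owner keeps
control of the contents but loses the ability to execute the thing
everyone else can run. A minimal Python sketch of what that mode does,
assuming POSIX semantics and a throwaway script name:

    import os
    import stat

    # Hypothetical throwaway script, just for illustration.
    path = "subgoal_process.sh"
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho 'subgoal running'\n")

    # chmod 677: owner rw- (6), group rwx (7), others rwx (7).
    # POSIX semantics assumed; on Windows os.chmod only honors the
    # write bit.
    os.chmod(path, 0o677)

    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))                   # -> 0o677
    print(bool(mode & stat.S_IXUSR))   # owner execute bit: False
    print(bool(mode & stat.S_IXGRP))   # group execute bit: True

)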
I should bring up again that I'm not really comfortable with the goal
hierarchy. I understand the idea and am able to reason through it,
but I still think there must be some fundamental flaw in it. I'm not
sure why; it's just an irrational hunch (though it may be rational to
someone with a deeper understanding).
--
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP: C462 FA84 B811 3501 9010 20D2 6EF3 77F7 BBD3 B003