RE: Intelligence is exploitative (RE: Zen singularity)

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 25 2004 - 16:36:02 MST


> The assumption that any superintelligence will simply avoid taking
> selfish actions is indeed, as you suggest, a LARGE stretch of the
> imagination. Ensuring a positive outcome would involve explicit
> engineering to exclude an implicit survival instinct. Most
> architectures utilizing mutually independent goals seem to leave this
> possibility wide open.
>
> Implementing a singly-rooted goal architecture in an engineered
> superintelligence would appear to close a lot of these gaps, by
> requiring all actions to ultimately serve a single goal (perhaps
> Friendliness, but it is arbitrary for this discussion). A survival
> sub-goal would inherit its utility from this supergoal.

Implementing a singly-rooted goal architecture in an engineered
superintelligence closes *NO* gaps if the superintelligence is radically
self-modifying -- unless that goal architecture is actually going to survive
the process of repeated self-modification!

My own feeling is that singly-rooted goal architectures are by nature
brittle and unlikely to survive repeated self-modification.
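To make the brittleness concern concrete, here is a minimal toy sketch in
Python -- with purely hypothetical names like GoalNode, not a model of
Novamente or of anyone's actual AGI design -- of a singly-rooted goal tree
in which every subgoal's utility flows from the supergoal, and of how a
single self-modification step can break that property:

# Minimal illustrative sketch of a singly-rooted goal tree.
# Every node's utility is derived from its parent, so all utility
# ultimately flows from the single supergoal at the root.

class GoalNode:
    def __init__(self, name, parent=None, weight=1.0):
        self.name = name          # e.g. "Friendliness", "survival"
        self.parent = parent      # None only for the root supergoal
        self.weight = weight      # fraction of the parent's utility inherited
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def utility(self):
        # Derived utility: product of weights up the chain to the supergoal.
        if self.parent is None:
            return 1.0
        return self.weight * self.parent.utility()

def singly_rooted(node, root):
    # The invariant the architecture relies on: every goal traces back
    # to the one supergoal.
    while node.parent is not None:
        node = node.parent
    return node is root

# The tree described in the quoted text: survival as a subgoal.
supergoal = GoalNode("Friendliness")
survival = GoalNode("survival", parent=supergoal, weight=0.3)
assert singly_rooted(survival, supergoal)      # holds before modification

# One toy "self-modification" step: the system rewrites its own goal
# structure and detaches survival, making it an independent root.
survival.parent = None
assert not singly_rooted(survival, supergoal)  # the invariant is gone

The point of the sketch is only that the single-rootedness is an invariant
the system itself is free to edit away; nothing in the data structure
enforces it across self-modifications.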

However, I realize that Eliezer and others have different intuitions on this
point.

Experimentation and mathematics will, in the future, give us greater insight
into this point ... assuming we don't annihilate ourselves or bomb ourselves
back into the Stone Age first ;-)

-- Ben G


