From: Michael Wilson (firstname.lastname@example.org)
Date: Thu Jul 14 2005 - 10:43:48 MDT
Robin Lee Powell wrote:
> Looking around me, I see substantial evidence that if there is
> *any* universal morality, it is only "nature red in tooth and claw"
> natural selection.
> If so, and you are right, it would be impossible to build *Friendly*
I agree. In an AI that isn't causally clean in design and execution,
the emergence of an iterative selective dynamic is nearly inevitable,
and preventing those dynamics from becoming open-ended and producing
arbitrary goal system stomps (usually either crippling or expansionist)
is extremely difficult. Attempting to make such an AGI design safe, or
even to control it to useful ends, is like trying, blindfolded, to plug
a crumbling dam that is constantly springing new leaks. The problem
is that the primary goal system will create secondary goal systems,
often nonobvious ones, that are not strongly constrained to respect
their parent goals. It is somewhat comparable to biological cancer in
fact, though much more dangerous.
I suspect that designing a safe AGI that is not causally clean is
beyond human design capability, and even if it isn't, I'd expect the
supporting research required to do so would show that it's a bad idea.
This is why the SIAI has been focusing on designs that have no
potential for goal system fracture; it is incredibly hard to design
safe unified goal systems, and verifiable, tractable evaluators for
them, never mind distributed/stochastic/messy ones (and yes, I say
this as a reformed former advocate of genetic algorithms, agents,
and activation propagation).
Geddes is merely engaging in wishful thinking, albeit on a truly
impressive existential scale.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT