From: Eliezer S. Yudkowsky (email@example.com)
Date: Sat May 04 2002 - 13:21:56 MDT
Ben Goertzel wrote:
> A comment on the relation between A2I2 and Novamente.
> They are both founded on a "self-organizing network" design. They are both
> a bit more high-level and abstract than neural network models. They are
> both much simpler than Eliezer thinks an AGI should be, and they both rely
> too much on self-organization and emergence to fit neatly and naturally into
> Eliezer's "Friendly goal system" framework.
Actually, meaning no offense by it, neither system fits into a Friendly AI
framework because neither system (either in present implementation or
announced future plans) appears to be capable of representing the cognitive
constructs hypothesized in "Creating Friendly AI". Novamente has
Turing-complete structures that modify each other, and therefore needs a
controlled-ascent feature during the period in which a hard takeoff seems
pragmatically impossible but it remains mathematically possible that you are
wrong. Voss says that A2I2 doesn't have the structural capability for
strongly recursive self-improvement, and since Voss understands the basic
FOOM! dynamic of hard takeoffs, I'm willing to trust to Voss's sense of
moral responsibility on this.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence