RE: Friendly Existential Wager

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jun 29 2002 - 07:24:58 MDT


> Since our own intelligence largely grew out of emergence in the
> first place, I don't find it so unreasonable to set up the
> conditions from which intelligence may emerge. Although that
> doesn't seem a reasonable characterization of what Ben is doing
> either.
>
> - samantha

No, mine is not an artificial-life-type approach... which would be
interesting, but in my view would probably take a very long time to run
and require immense amounts of computing power.

My design is based on setting up certain knowledge representation
structures (basically, a certain type of weighted, labeled hypergraph)
and certain learning algorithms that operate on these structures, and
then teaching the system in such a way that the hypergraph
self-organizes into the "emergent structures & dynamics of mind." It is
tricky to figure out which structures/dynamics to build in and which to
coax into emergence; clearly, you need to build in *something* but not
*everything*, and we have some pretty specific ideas about what to build
in and how to coax the emergence of the rest...
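To make "weighted, labeled hypergraph" concrete, here is a minimal
sketch of such a structure in Python. The class names, the example
labels, and the use of a [0, 1] weight are illustrative assumptions on
my part, not a description of Ben's actual system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Hyperedge:
    """A labeled, weighted edge connecting any number of nodes."""
    label: str               # relationship type, e.g. "Inheritance"
    nodes: frozenset         # the nodes this edge connects (any arity)
    weight: float            # assumed strength value in [0, 1]

@dataclass
class Hypergraph:
    edges: set = field(default_factory=set)

    def add(self, label, nodes, weight):
        self.edges.add(Hyperedge(label, frozenset(nodes), weight))

    def edges_containing(self, node):
        """All hyperedges incident to a given node."""
        return [e for e in self.edges if node in e.nodes]

# Toy usage: a couple of weighted, labeled relationships
g = Hypergraph()
g.add("Inheritance", {"cat", "animal"}, 0.9)
g.add("Similarity", {"cat", "dog"}, 0.6)
print(len(g.edges_containing("cat")))   # both edges touch "cat"
```

The key distinction from an ordinary labeled graph is that each edge
connects an arbitrary *set* of nodes rather than exactly two, which is
what lets learning algorithms operate on relationships of any arity.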

Eliezer's intuitions about what to build in vs. what to coax into emergence,
in AGI design, are different from mine...

ben g



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT