From: Ben Goertzel (ben@goertzel.org)
Date: Thu Mar 01 2007 - 18:30:13 MST
Shane Legg wrote:
> On 3/1/07, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
> As I remarked on a previous occasion, for purposes of discussion we may
> permit the utility function to equal the integral of iron atoms over time.
> If you can't figure out how to embody this utility function in an AI, you
> can't do anything more complicated either.
>
>
> I don't see the point in worrying about whether one can integrate iron
> atoms; indeed, this type of thinking concerns me.
Well, one **could** create an AI system with the top-level supergoal G as a
"free parameter", so that it could achieve any goal G with complexity less
than K (according to whatever complexity measure seems apropos, e.g.
algorithmic information) .... I guess that is the type of architecture
Eliezer is implicitly advocating.
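
To make that concrete, here is a minimal sketch in Python (my own
illustration only, not anyone's actual design): the supergoal G is just a
utility function handed to a generic decision loop. The world_model.rollout
interface and the toy iron_utility are assumptions invented for the example.

    from typing import Callable, Sequence

    # A utility function maps a predicted world-history to a real-valued score.
    Utility = Callable[[Sequence[str]], float]

    class GoalParameterizedAgent:
        def __init__(self, utility: Utility, world_model):
            self.utility = utility          # the free-parameter supergoal G
            self.world_model = world_model  # assumed to predict outcomes of actions

        def choose_action(self, actions: Sequence[str]) -> str:
            # Pick the action whose predicted history scores highest under G.
            return max(actions,
                       key=lambda a: self.utility(self.world_model.rollout(a)))

    # A toy "integral of iron atoms over time" utility on a symbolic history:
    def iron_utility(history: Sequence[str]) -> float:
        return float(sum(state.count("Fe") for state in history))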
On the other hand, this is obviously not the only sort of approach one can
take. It may be more effective to create an AI architecture specifically
customized for a particular top-level supergoal, or for a specific class of
top-level supergoals.
Novamente is more in the latter vein: it combines an explicit goal hierarchy
with some implicit goals built into the architecture, so that it will work
most effectively when the top-level supergoals of the explicit goal
hierarchy are harmonious with the implicit goals built into the
architecture.
Conceptually, I would say that this is sorta how the human brain/mind
works as well...
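
For contrast, here is an equally toy sketch of the latter style (again my
own illustration, not Novamente's actual code): the explicit goal hierarchy
is data handed to the agent, while a couple of invented "implicit biases"
are wired into the action-scoring code itself, so the agent behaves most
coherently when the two pull in the same direction.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Goal:
        name: str
        score: Callable[[str], float]          # how well an action serves this goal
        subgoals: List["Goal"] = field(default_factory=list)

    # Invented stand-ins for biases that live in the architecture itself,
    # not in the explicit goal tree.
    IMPLICIT_BIASES: Dict[str, float] = {"seek_novelty": 0.3, "conserve_resources": 0.2}

    def implicit_score(action: str) -> float:
        return sum(w for name, w in IMPLICIT_BIASES.items() if name in action)

    def explicit_score(goal: Goal, action: str) -> float:
        # Sum the goal's own evaluation with those of its subgoals, recursively.
        return goal.score(action) + sum(explicit_score(g, action) for g in goal.subgoals)

    def choose_action(supergoal: Goal, candidates: List[str]) -> str:
        # Explicit and implicit evaluations are simply summed; when the supergoal
        # is "harmonious" with the built-in biases, both terms favor the same action.
        return max(candidates, key=lambda a: explicit_score(supergoal, a) + implicit_score(a))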
>
> What worries me is that in 10 short years the world financial system may
> suddenly start to buckle under the weight of a mysterious new hedge fund...
> whilst the sl4 list is still debating the integration of iron atoms and
> what the true meaning of meaning may or may not be.
>
> Shane
>
Well, Shane, this list has a diverse membership, including some of us who
are working on concrete AGI projects ;-)
Ben