From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Thu Jun 03 2004 - 19:11:59 MDT
--- Ben Goertzel <ben@goertzel.org> wrote:
>
> > But, in any case, building a very clever system to reach a
> > goal (Friendliness) seems to me to be more in line with what
> > Eliezer is doing than building a generalized, humanlike
> > person. Since it seems easier to build that than a humanlike
> > person, it would be reasonable to worry about the attractors
> > that other projects might fall into.
>
> I'm not sure why you think it's easier to build this kind of
> single-goaled, super-powerful optimization process than to
> build a human-level self-improving general intelligence.
>
> One important point is that we have an example of a human-level
> general intelligence -- billions of examples, in point of fact.
> But we have no examples of the kind of optimization process
> Eliezer's proposing now, so to construct one, we must proceed
> entirely based on theory and experimentation. And IMO current
> mathematical and computing theory does not bring us very far
> toward knowing how to create this kind of optimization process
> within reasonable computational space and time constraints.
>
> -- Ben G
>
My God, I think I finally begin to grasp Eliezer's strategy a
tiny bit. A non-conscious AI with more in common with a
Chessmaster program than with a human mind...? A finely
calibrated game-playing AI whose game objective is to deduce
and execute the human-Friendliest strategy it can find, while
gathering as much hard data about the universe and everything
in it as possible and updating its strategy on the fly, all
without destroying or enslaving the humans.
It will want to know everything...
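If I have the shape of it right, the core would be a bare
optimization loop. Here is a rough Python sketch of that shape;
every name in it (friendliness_score, observe, the toy plans) is
my own placeholder for illustration, not anything from Eliezer's
actual design:

    import random

    def friendliness_score(plan, beliefs):
        # Hypothetical stand-in: in the real problem, defining
        # this function correctly is the whole difficulty.
        return sum(beliefs.get(step, 0.0) for step in plan)

    def observe():
        # Stand-in for "gathering hard data about the universe".
        return {"action_a": random.random(),
                "action_b": random.random()}

    beliefs = {}
    plans = [["action_a"], ["action_b"], ["action_a", "action_b"]]
    for _ in range(10):                # the real loop never stops
        beliefs.update(observe())      # revise the world-model on the fly
        best = max(plans, key=lambda p: friendliness_score(p, beliefs))
        # executing `best` would go here, then re-plan on fresh data

The fixed objective never changes; only the data and the chosen
strategy do. That is what would make it a game-player rather than
a person.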
Tom