What are "AGI-first'ers" expecting AGI will teach us about FAI?

From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Sat Apr 12 2008 - 09:11:04 MDT


On Fri, Feb 29, 2008 at 6:02 PM, Ben Goertzel <ben@goertzel.org> wrote:
> The fact that AGI ethics is incredibly badly understood right now, and
> the only clear route to understanding it better is to make more
> empirical progress toward AGI. I find it unlikely that dramatic
> advances in AGI ethical theory are going to be made in a vacuum,
> separate from coupled advances in AGI practice. I know some others
> disagree on this.

For any of the many people who agree with Ben's sentiment:

Many people have made AI advances of various kinds in the past. In
none of these cases, to my knowledge, have FAI researchers said,
"A-ha, that's one of the pieces of data I was waiting for; this
advances FAI theory." Why should we expect this to change in the
future? At the very least, doesn't this suggest that even if FAI
advances require AGI advances, the "bottleneck" is that too few people
are working on deriving FAI from existing AGI, rather than too few
people working on AGI itself?

Are there specific facts about AGI that you're waiting to find out,
such that if the result of a pending experiment is A, then successful
FAI theory lies in one direction, but if the result is B, it lies in a
different direction? If so, what are those facts?

At what point will you know that AGI has advanced enough that FAI can proceed?

For SIAI specifically: how is OpenCog going to be "coupled" to
"dramatic advances in AGI ethical theory"?
