Re: Limiting scope of first attempt at seed AI

From: Robin Lee Powell
Date: Fri Jul 15 2005 - 17:08:09 MDT

On Fri, Jul 15, 2005 at 04:44:15PM -0500, Chris Capel wrote:
> Something I've not seen addressed, perhaps because I haven't been
> looking, in all this speculation about the best supergoal for a
> seed AI is whether we shouldn't really be worried about doing much
> more than keeping the AI from failing after a takeoff. Here's my
> idea: instead of trying to come up with a supergoal that will
> somehow do the right thing in every circumstance, we just try to
> find the goal that will allow the AI to become superintelligent
> without endangering anyone, and that allows the AI some basic
> communication ability, and then the AI can come up with a better
> goal system for an even better AI.
> I would guess that limiting the scope thusly wouldn't really buy
> us much, if anything, in the way of simplifying the task of making
> a Friendly seed AI.

I don't think so. Friendliness content seems to be relatively easy
compared to Friendliness structure.


--
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT