From: Chris Capel (pdf23ds@gmail.com)
Date: Fri Jul 15 2005 - 15:44:15 MDT
Something I've not seen addressed in all this speculation about the best
supergoal for a seed AI, perhaps because I haven't been looking, is
whether we really need to worry about doing much more than keeping the
AI from failing after a takeoff. Here's my idea:
instead of trying to come up with a supergoal that will somehow do the
right thing in every circumstance, we just try to find a goal that
will allow the AI to become superintelligent without endangering
anyone and that allows the AI some basic communication ability, and
then the AI itself can come up with a better goal system for an even
better AI.
I would guess that limiting the scope this way wouldn't really buy us
much, if anything, in the way of simplifying the task of making a
Friendly seed AI. Would it really? Or is that approach already the one
being taken?
I think this means that domains (as an alternative to CV, if that
makes sense) are out of the scope of discussion--we don't want our
*first*-iteration transhuman divvying up the universe right after
takeoff--but I think CV is probably worth thinking about, because it
might be directly crucial to the Friendliness of any AI, period. But
perhaps it should be considered in a context where the AI is limited
to determining the collective volition of humans concerning the best
behavior for the AI, and vis actions are explicitly limited to
interacting verbally with humans, or some other safeguard, until the
OK is given.
Of course, the discussion in the past few days has been regarding
worst-case scenarios, and no matter how many safeguards we put into a
seed AI--no matter how limited we make its scope of action--the real
risks involved with takeoff are unchanged, so this discussion is
justified.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)