Re: friendly ai

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 28 2001 - 11:43:23 MST


Ben Goertzel wrote:
>
> > "One of" the goals? Why does an AI need anything else? Friendliness
> > isn't just a goal that's tacked on as an afterthought; Friendliness is
> > *the* supergoal - or rather, all the probabilistic supergoals are
> > Friendship material - and everything else can be justified as a subgoal of
> > Friendliness.
>
> Creation of new knowledge, and discovery of new patterns in the world, are
> goals that I believe are innate to humans, in addition to our survival- and
> reproduction-oriented goals.

Say *what*? Creation of new knowledge and discovery of new patterns are
*known* to be innate to humans - no need to "believe" it. Why? Because
they promote survival and reproduction. There's no brain-represented
connection from discovery to reproduction, AFAIK, but that connection is
certainly the historical origin of that joy.

> Should we not supply AIs with them too?
> Webmind is being supplied with these goals, because they give it an intrinsic
> incentive to grow smarter...

A transhuman general intelligence would reliably deduce such goals as
declarative subgoals of Friendliness, selfishness, or pretty much anything
else. A prehuman AI can just be given the flat fact that curiosity is a
subgoal of Friendliness, or the flat knowledge that curiosity leads to
greater effectiveness at Friendliness - the two should be pretty much
equivalent if you did the goal system right - with the details of the
knowledge being learned over time.
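(A toy sketch of what I mean, in Python - not Webmind's architecture or any
real goal system, with all names invented for illustration: curiosity has no
intrinsic value of its own, it just borrows desirability from the
Friendliness supergoal through a stored estimate of how much it helps, and
that estimate is exactly the piece that gets refined as the AI learns.)

    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        intrinsic_value: float = 0.0   # nonzero only for supergoals
        # list of (subgoal, estimated probability that achieving it
        # furthers this goal)
        subgoals: list = field(default_factory=list)

        def subgoal_desirability(self, sub):
            # A subgoal inherits value only through its estimated
            # contribution to the parent goal; it is not an independent
            # source of value.
            for child, p_contribution in self.subgoals:
                if child is sub:
                    return p_contribution * self.intrinsic_value
            return 0.0

    # Friendliness is the sole source of intrinsic value in this toy model.
    friendliness = Goal("Friendliness", intrinsic_value=1.0)

    # Curiosity starts with zero intrinsic value; the "flat fact" that it is
    # a subgoal is just a stored, revisable estimate of how much it helps.
    curiosity = Goal("curiosity / knowledge creation")
    friendliness.subgoals.append((curiosity, 0.8))

    print(friendliness.subgoal_desirability(curiosity))  # 0.8 - all borrowed value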

First the Friendly AIs, *then* the boon drinking companions. Y'know, just
in case that "hard takeoff" thing turns out to be the correct scenario.

> > Among the offspring and thus the net population weighting, or among the
> > original AIs? If among the original AIs, how does the percentage of time
> > spent influence the goal system? And why aren't the "goal of creating
> > knowledge" and the "goal of learning new things" subgoals of Friendliness?
>
> They just aren't subgoals of "friendliness to humans" ... or of
> "Friendliness" under any definition of that term that seems natural to me ...

It seems *very* natural to me. *I'm* Friendly, or at least I try to be, and to
me there's a very obvious connection between creating knowledge / learning
new things and getting to the Singularity. I have the innate human joy in
it too, of course, but it's an innate joy that I cherish and cultivate
because it's important to my declarative goals.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


