From: Ben Goertzel (ben@webmind.com)
Date: Sun Jan 28 2001 - 11:09:43 MST
> Ben Goertzel wrote:
> >
> > So, suppose that Friendliness to humans is one of the goals of an AI
> > system, probabilistically weighted along with all the other goals.
>
> "One of" the goals? Why does an AI need anything else? Friendliness
> isn't just a goal that's tacked on as an afterthought; Friendliness is
> *the* supergoal - or rather, all the probabilistic supergoals are
> Friendship material - and everything else can be justified as a subgoal of
> Friendliness.
Creation of new knowledge, and discovery of new patterns in the world, are
goals that I believe are innate to humans, in addition to our survival- and
reproduction-oriented goals. Should we not supply AIs with them too? Webmind
is being supplied with these goals, because they give it an intrinsic
incentive to grow smarter...
>
> > Then, my guess is that as AI's become more concerned with their own
> > social networks and their goals of creating knowledge and learning new
> > things, the weight of the Friendliness goal is going to gradually drift
> > down.
>
> Among the offspring and thus the net population weighting, or among the
> original AIs? If among the original AIs, how does the percentage of time
> spent influence the goal system? And why aren't the "goal of creating
> knowledge" and the "goal of learning new things" subgoals of Friendliness?
>
They just aren't subgoals of "friendliness to humans" ... or of
"Friendliness" under any definition of that term that seems natural to me ...
ben