Re: friendly ai

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 03:05:54 MST


"Eliezer S. Yudkowsky" wrote:
>
> Ben Goertzel wrote:
> >
> > So, suppose that Friendliness to humans is one of the goals of an AI system,
> > probabilistically weighted along with all the other goals.
>
> "One of" the goals? Why does an AI need anything else? Friendliness
> isn't just a goal that's tacked on as an afterthought; Friendliness is
> *the* supergoal - or rather, all the probabilistic supergoals are
> Friendship material - and everything else can be justified as a subgoal of
> Friendliness.
>

It seems like a pretty limited SI if its only goal is Friendliness.
Other goals, such as exploring the universe, expanding its
understanding, and creating interesting things, would seem to be
almost a requirement of a real intelligence. For that matter, I would
expect a real SI to have the ability to formulate its own goals,
including deeply questioning even the pre-defined supergoal. And
Friendliness to what? Only to humans? What about other sentiences,
when/if it encounters them?

> > Then, my guess is that as AI's become more
> > concerned with their own social networks
> > and their goals of creating knowledge and learning new things, the weight of
> > the Friendliness goal is going to
> > gradually drift down.
>
> Among the offspring and thus the net population weighting, or among the
> original AIs? If among the original AIs, how does the percentage of time
> spent influence the goal system? And why aren't the "goal of creating
> knowledge" and the "goal of learning new things" subgoals of Friendliness?
>

Why should they be? I thought you had decided quite some time ago that
hardwired goals in an SI simply would not hold.

- samantha
