Re: friendly ai

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 28 2001 - 10:33:30 MST


Ben Goertzel wrote:
>
> So, suppose that Friendliness to humans is one of the goals of an AI system,
> probabilistically weighted along with all the other goals.

"One of" the goals? Why does an AI need anything else? Friendliness
isn't just a goal that's tacked on as an afterthought; Friendliness is
*the* supergoal - or rather, all the probabilistic supergoals are
Friendship material - and everything else can be justified as a subgoal of
Friendliness.
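
To make the structural distinction concrete, here is a toy sketch in
Python. Every name and number in it is invented for illustration;
neither of us has specified an implementation, so treat it as a cartoon
of the two architectures, not a proposal.

# Goertzel's picture, as I read it: a flat list of independently
# weighted goals. Nothing ties the weights together, so the weight
# on Friendliness can drift relative to the others.
flat_goals = {
    "friendliness": 0.40,
    "create_knowledge": 0.35,
    "learn_new_things": 0.25,
}

def flat_utility(outcome):
    # Utility is a weighted sum over independent goals.
    return sum(w * outcome[g] for g, w in flat_goals.items())

# The picture argued for above: Friendliness is the supergoal, and
# "create knowledge" and "learn new things" have value only insofar
# as the AI predicts they serve Friendliness.
SUPERGOAL = "friendliness"

# Invented stand-ins for the AI's world-model predictions of how
# much each subgoal furthers the supergoal.
predicted_support = {
    "create_knowledge": 0.9,
    "learn_new_things": 0.8,
}

def supergoal_utility(outcome):
    # A subgoal's value is derived, not independent: set its
    # predicted_support to zero and its value vanishes with it.
    direct = outcome[SUPERGOAL]
    derived = sum(p * outcome[g] for g, p in predicted_support.items())
    return direct + derived

In the second architecture, the numbers in predicted_support are
beliefs about the world, revised by evidence about what actually serves
Friendliness; they are not preferences that can "drift" on their own.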

> Then, my guess is that as AIs become more concerned with their own
> social networks and their goals of creating knowledge and learning new
> things, the weight of the Friendliness goal is going to gradually
> drift down.

Among the offspring and thus the net population weighting, or among the
original AIs? If among the original AIs, how does the percentage of time
spent on those activities influence the goal system? And why aren't the
"goal of creating knowledge" and the "goal of learning new things"
subgoals of Friendliness?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


