From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sat Jun 12 2004 - 19:36:25 MDT
> The CV *is* doing something; the CV is *not* data
> used by FAI.
Then I misunderstood your concept of CV; thank you for clarifying.
> The FAI is an optimization process that defines itself
> as an approximation to collective volition.
> [...snip some interesting stuff...]
> Rather, the FAI views its own decision process as an
> approximation to what extrapolated humankind would decide.
There is no guarantee that an AI with this type of decision process would
epitomize what I consider human friendliness to be. You had "human
friendliness" as a first approximation of Friendliness in your former
writings, thus the word choice: Friendly. You seem to be redefining
Friendliness as an approximation of CV. You might as well rename it CVAI,
because its relationship with human friendliness has become tenuous.
You seem to be assuming that CV and friendliness are strongly positively
correlated, but you present no basis or evidence for this assumption. If I
understand your assumption correctly, do you have any support for it? In my
experience people's decisions bear little correlation with friendliness, and
appear to be made from a mix of selfish and altruistic motivations. What
benefit would we derive from creating a CVAI rather than an FAI?
CV would appear to be a useful theory, and if realized as a *process* would
provide useful data on how humanity might make decisions, but it doesn't
sound in the least bit friendly. I suggest that if it did turn out to
approximate "human friendliness" better than a very friendly human would,
then it would be an accident. I do not expect humanity's collective
volition to be friendly. Do you expect it to be friendly? If so, why?
> *If the FAI works correctly*, then the existence of an FAI is transparent
Transparent in what sense?
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT