From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sun Jun 16 2002 - 22:06:08 MDT
> Anand wrote:
> >02. Why is volition-based Friendliness the assumed model of Friendliness
It is the assumed model because (CFAI 1.3): "Punting the issue of 'What is
good?' back to individual sentients enormously simplifies a lot of moral
issues." It's simpler! Individual humans are never going to agree on the
specifics of altruistic behaviour. So, what to do? Re-frame the problem so
that everyone gets to have their own opinion, which will be taken into
account when the Friendly AI is thinking about, or interacting with, that
person. This appears to be a high-freedom, low-risk design from where I'm
standing (as a human). It takes care of many objections people have to
codifying a single, fixed morality.
> >What will it and what will it not constitute and allow?
I presume that by 'allow' you mean: "What will architecture + content allow
a Friendly AI to do or not do?" If I understand you correctly then... this
is a difficult bit: Friendliness content.
Eliezer summarizes in CFAI 1.3: "Volition-based Friendliness has both a
negative aspect - don't cause involuntary pain, death, alteration, et
cetera; try to do something about those things if you see them happening -
and a positive aspect: to try and fulfill the requests of sentient beings."
It is going to be a tremendously interesting task to teach a Seed AI the
thousands of things it needs to know in order to be Friendly. Eliezer: have
you started making a list of these things yet? I think it is worth
starting on one, even if the final format for that information won't be
known for some time.
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT