From: Philip Goetz (firstname.lastname@example.org)
Date: Tue Nov 01 2011 - 11:13:47 MDT
On Mon, Oct 31, 2011 at 3:52 PM, <email@example.com> wrote:
> Quoting Philip Goetz <firstname.lastname@example.org>:
>>> From what I've overheard, one of the biggest difficulties with FAI is
>>> that there are a wide variety of possible forms of AI, making it
>>> difficult to determine what it would take to ensure Friendliness for
>>> any potential AI design.
>> There are 4 chief difficulties with FAI; and the one that is
>> most important and most difficult is the one that people
>> in the SIAI say is not a problem.
> It seems that friendliness may not be, and probably is not, the
> characteristic/behavior most consequential to AI agency and/or personhood.
> For example, people are often friendly but do not have a conscience. An
> empathic AI would be friendly if and when that behavior is warranted, but
> not as a default (i.e., not as phony behavior or a spurious characteristic).
The term "Friendly AI" is a bit of clever marketing. It's a technical
term that has nothing to do with being friendly. It denotes a
goal-driven agent architecture that provably optimizes for its goals
and provably does not change them.
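That architecture can be caricatured in a few lines. The sketch below is only my illustration of the two properties named above (acts to maximize a fixed goal; cannot rebind that goal), not SIAI's actual formalism; all names are invented for the example.

```python
# Toy "goal-stable" agent: the goal is fixed at construction and every
# action is chosen to maximize it. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the goal's fields cannot be reassigned
class Goal:
    target: float

class GoalDrivenAgent:
    def __init__(self, goal: Goal):
        self._goal = goal  # set once; no method below ever rebinds it

    def utility(self, state: float) -> float:
        # Higher utility the closer the state is to the goal's target.
        return -abs(state - self._goal.target)

    def choose(self, actions: dict[str, float], state: float) -> str:
        # Greedily pick the action whose resulting state scores highest.
        return max(actions, key=lambda a: self.utility(state + actions[a]))

agent = GoalDrivenAgent(Goal(target=10.0))
best = agent.choose({"up": +1.0, "down": -1.0, "stay": 0.0}, state=8.0)
# "up": it moves the state from 8.0 to 9.0, nearest to the target 10.0
```

The frozen dataclass stands in for the "does not change its goals" clause: attempting to overwrite `agent._goal.target` raises an error. The hard part of FAI, of course, is choosing the goal, which this sketch says nothing about.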
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT