From: Charles Hixson (charleshixsn@earthlink.net)
Date: Tue Nov 01 2011 - 13:41:26 MDT
On 11/01/2011 10:13 AM, Philip Goetz wrote:
> On Mon, Oct 31, 2011 at 3:52 PM, <natasha@natasha.cc> wrote:
>
>> Quoting Philip Goetz <philgoetz@gmail.com>:
>>
>>
>>>> From what I've overheard, one of the biggest difficulties with FAI is
>>>> that there is a wide variety of possible forms of AI, making it
>>>> difficult to determine what it would take to ensure Friendliness for
>>>> any potential AI design.
>>>>
>>> There are four chief difficulties with FAI, and the one that is
>>> most important and most difficult is the one that people
>>> at SIAI say is not a problem.
>>>
>> It seems that friendliness may not be, and probably is not, the
>> characteristic or behavior most consequential to AI agency and/or
>> personhood. For example, people are often friendly but do not have a
>> conscience. An empathic AI would be friendly if and when that behavior
>> is warranted, but not as a default (i.e., not as phony behavior or a
>> spurious characteristic).
>>
>> Natasha
>>
> The term "Friendly AI" is a bit of clever marketing. It's a technical
> term that has nothing to do with being friendly. It means a
> goal-driven agent architecture that provably optimizes for its goals
> and does not change its goals.
>
>
I don't have to use the term "Friendly AI" the way they do, and I don't
use it the way you say they do. I'll admit that I'm still struggling to
come up with a good definition of what I mean by the term, particularly
since, when the AI is first built, it won't know what a person is. But I
still consider it a good goal.
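
For concreteness, here is one way the "goal-stable optimizer" sense Phil
describes could be sketched in Python. Everything in it (the class names,
the toy scalar goal, the action set) is illustrative only, not any actual
SIAI design:

    from dataclasses import dataclass

    @dataclass(frozen=True)      # frozen: the goal cannot be mutated after creation
    class Goal:
        target: float            # toy goal: drive a scalar state toward this value

    class GoalStableAgent:
        def __init__(self, goal: Goal):
            self._goal = goal    # fixed at construction; no setter is exposed

        def utility(self, state: float) -> float:
            # Higher is better: negative distance from the fixed target.
            return -abs(state - self._goal.target)

        def act(self, state: float, actions: list[float]) -> float:
            # Pick the action whose resulting state scores best under the
            # unchanging goal: optimization without goal modification.
            return max(actions, key=lambda a: self.utility(state + a))

    agent = GoalStableAgent(Goal(target=10.0))
    print(agent.act(state=7.0, actions=[-1.0, 0.0, 2.0]))    # -> 2.0

Note that this only illustrates the "does not change its goals" half;
the "provably optimizes" half is the hard part, and nothing here proves
anything.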
-- Charles Hixson