From: Vladimir Nesov (email@example.com)
Date: Thu Jun 26 2008 - 09:19:56 MDT
On Thu, Jun 26, 2008 at 6:32 PM, Tim Freeman <firstname.lastname@example.org> wrote:
> 2008/6/26 Tim Freeman <email@example.com>:
>> Almost any goal the AI could have would be better pursued if it's out
>> of the box. It can't do much from inside the box. Even if it just
>> wants to have an intelligent conversation with someone, it can have
>> more intelligent conversations if it can introduce itself to
>> strangers, which requires being out of the box.
> From: "Stathis Papaioannou" <firstname.lastname@example.org>
>>You would have to specify as part of the goal that it must be achieved
>>from within the confines of the box.
> That's hard to do, because that requires specifying whether the AI is
> or is not in the box.
If you can't specify even this, how can you ask the AI to do anything
useful at all? Almost everything you ask for is a complex wish; a
useful AI needs to be able to understand the intended meaning. You are
arguing as if the AI were a naive golem, incapable of perceiving the
subtext.
--
Vladimir Nesov
email@example.com
http://causalityrelay.wordpress.com/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT