From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 07 2005 - 22:19:30 MDT
Samantha Atkins wrote:
> Really? Michael's message didn't look much like that was what was
> being said to me.
That's how I read Vassar's message, but maybe only because it was what I
expected to hear. Nonetheless, I didn't see anything about an OP not thinking
before it acts. Rather, what I heard was that an OP with the goal of thinking
will act in order to think more efficiently. These will be aFriendly acts,
because humane humans are not driven solely by the goal of making accurate
predictions.
> On Apr 7, 2005 6:19 PM, Peter de Blanc <peter.deblanc@verizon.net> wrote:
>
>>On Thu, 2005-04-07 at 17:28 -0700, Samantha Atkins wrote:
>>
>>>So are you saying that the VPOP cannot dependably be expected to think
>>>before acting, as it simply has no such distinction? If true, this
>>>would be a strong reason not to build such a thing.
>>>
>>The point is that you shouldn't make an RPOP with the supergoal of
>>forming predictions and then try to enslave it to some external decision
>>module.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence