From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 26 2005 - 09:49:43 MST
Ben Goertzel wrote:
> Eliezer,
>
>>SIAI's notion of Friendliness assurance relies on being able to design
>>an AI specifically for the sake of verifiability. Needless to say,
>>humans are not so designed. Needless to say, it is not a trivial
>>project to thus redesign a human. I cannot imagine going about it in
>>such a way as to preserve continuity of personal identity, the overall
>>human cognitive architecture, or much of anything.
>
> Hmmm... Regarding your latter sentence, I'm not sure why you feel this
> way. But it's an interesting question, to which I don't have an
> answer. If there is a detailed line of reasoning underlying your
> statement I'd be curious to hear it.
>
> The question you raise is: given any two minds M and N, is it
> possible to create a series of intermediate minds M1, M2, M3, ..., M_n
> leading from M to N, each differing from the last by a change small
> enough to preserve identity?

Let me rephrase: I can imagine that over the next ten thousand
subjective years, it would be possible, desirable, and necessary that
you should, one change at a time, grow into a mind of which it was
possible to prove that future self-modifications obeyed some invariant
or other.

However, if you wanted to make the change to deterministic cognition in
one jump, today, I think it would probably kill you.
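
In symbols, as an illustrative sketch only (the notation below is editorial, not from either message): write A ~ B when a single change from mind A to mind B preserves continuity of personal identity, and let Inv(X) mean that all of X's future self-modifications provably obey the invariant.

    % Illustrative LaTeX sketch; the symbols are assumptions, not the
    % authors' notation.
    % Ben's question: for arbitrary minds M and N, does a finite
    % identity-preserving chain exist?
    \exists n \;:\; M = M_0 \sim M_1 \sim \cdots \sim M_n = N
    % Eliezer's reply, restated: when the target N must be verifiable,
    %   \mathrm{Inv}(N) \equiv \forall N' \,(N \rightsquigarrow N'
    %     \implies \mathrm{Inv}(N')),
    % such a chain may exist only for very large n (many small changes
    % over thousands of subjective years); the one-jump case n = 1,
    % moving to deterministic cognition today, would not preserve
    % identity.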
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence