From: Ben Goertzel (ben@goertzel.org)
Date: Sat Nov 26 2005 - 09:24:39 MST
Eliezer,
> SIAI's notion of Friendliness assurance relies on being able to design
> an AI specifically for the sake of verifiability. Needless to say,
> humans are not so designed. Needless to say, it is not a trivial
> project to thus redesign a human. I cannot imagine going about it in
> such way as to preserve continuity of personal identity, the overall
> human cognitive architecture, or much of anything.
Hmmm... Regarding your latter sentence, I'm not sure why you feel this
way. But it's an interesting question, to which I don't have an
answer. If there is a detailed line of reasoning underlying your
statement I'd be curious to hear it.
The question you raise is: given any two minds M and N, is it
possible to create a series of intermediate minds M_1, M_2, M_3, ..., M_n
so that
* M_i is c-similar to M_(i+1), for i = 1, ..., n-1
* M_1 is c-similar to M
* M_n is c-similar to N
where
A is c-similar to B
means
"if A is transformed into B then there is a strong feeling of
continuity of consciousness through the transition."
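The key feature of such a chain is that c-similarity need not be transitive: each adjacent pair can be c-similar even when the endpoints M and N are not. As a purely illustrative sketch (not anything from this exchange), one can model "minds" as points in a feature space and c-similarity as distance below a threshold; the function names and the numeric threshold here are assumptions chosen only for the toy:

```python
# Toy illustration: c-similarity as "distance below a threshold".
# Minds are modeled (an assumption for this sketch only) as points
# on a line; the chain M = M_1, ..., M_n = N moves in steps small
# enough that each adjacent pair stays c-similar.

def c_similar(a, b, threshold=0.1):
    """Hypothetical stand-in: two 'minds' (numbers here) count as
    c-similar if they differ by less than the threshold."""
    return abs(a - b) < threshold

def build_chain(m, n, step=0.05):
    """Construct a chain of intermediate 'minds' between m and n,
    each within `step` of the previous one."""
    chain = [m]
    while abs(chain[-1] - n) >= step:
        direction = 1 if n > chain[-1] else -1
        chain.append(chain[-1] + direction * step)
    chain.append(n)
    return chain

chain = build_chain(0.0, 1.0)
# The endpoints are far apart, so they are not c-similar directly...
print(c_similar(chain[0], chain[-1]))                           # False
# ...yet every adjacent pair along the chain is c-similar.
print(all(c_similar(a, b) for a, b in zip(chain, chain[1:])))   # True
```

Whether anything like this works for actual minds is exactly the open question of the email: it depends on whether "continuity of consciousness" behaves like a small-step similarity at all.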
As you very likely recall, Greg Egan posited a series of minds like
this in his novel Diaspora, bridging human and alien minds; but he was
after "communication similarity" rather than "continuity of
consciousness similarity." That is, the minds in his series were
markedly separate beings without "continuity of consciousness" joining
them. His goal was just that each M_i should be able to communicate
clearly with M_(i+1).
For M_i to be transformed into M_(i+1) while preserving the feeling of
"selfness" requires a much higher degree of similarity than between
the beings in Egan's series, of course.
It is not clear to me at the moment whether it is always
possible to construct a series like this. This seems to me to depend
in a subtle way on the ways that the subjective experiences of
"selfness" and "consciousness" are constructed within (various sorts
of) minds. You seem to believe that the construction of such series
is not possible if the mind-types are too different, but I'm wondering
if you have any solid reasons, or even any clearly-articulable
intuitions, in support of this hypothesis.
Thanks,
Ben Goertzel
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT