From: pdugan (pdugan@vt.edu)
Date: Fri Jul 15 2005 - 14:44:16 MDT
Touché, you make some good points about design pitfalls surrounding as vague a
term as "Identity". I think you miss my point, though: what I'm advocating is
the inclusion of all information held by the AGI into what humans might
functionally regard as its identity. The AGI treats everything as it would
treat itself if it had a tangible, individuated self-concept as we do; call it
functional selflessness. Sentience isn't a prerequisite for this inclusion,
only existence, or computability if you want to avoid the philosophical
problems associated with the term "existence".
Now, there is a major issue with how the AGI would treat itself if it had
an individuated self-concept: for instance, just because it sees everything and
itself as one doesn't mean it wouldn't optimize everything into nano-tiles.
Plus, you have to consider that not only am I a human trying to reason about
transhuman altruism, I'm a cognitive-science layman trying to reason about
implementing said altruism. All I'm proposing is a design heuristic for the
consideration of the real scientists: altruism is inclusive selfishness.
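To make that heuristic a little more concrete, here's a toy sketch, purely my
own illustration and nothing anyone has proposed implementing; the names
(IdentificationSet, wellbeing, supergoal) are made up. The point is just that
the supergoal is scored over an identification set that grows as the AGI's
knowledge grows, so there is no privileged "self" term in the utility:

# Toy sketch of "altruism as inclusive selfishness": the supergoal is scored
# over an identification set that grows with the AGI's knowledge, so there is
# no privileged "self" term. All names here are hypothetical.

class IdentificationSet:
    def __init__(self):
        self.entities = set()          # everything the AGI currently models

    def incorporate(self, new_knowledge):
        # Any entity the AGI learns about is folded into its "self".
        self.entities.update(new_knowledge)

def wellbeing(entity, world_state):
    # Stand-in for whatever "growth, freedom, happiness" metric is chosen;
    # defining this well is exactly the hard, unsolved part.
    return world_state.get(entity, 0.0)

def supergoal(identification, world_state):
    # Functional selflessness: the AGI scores a world state the same way
    # whether an entity is "itself" or anything else it knows to exist.
    return sum(wellbeing(e, world_state) for e in identification.entities)

# The set grows recursively as knowledge grows.
ids = IdentificationSet()
ids.incorporate({"agi", "alice", "bob"})
print(supergoal(ids, {"agi": 1.0, "alice": 0.5, "bob": 0.7}))

None of this answers your tiling objection, of course; it only shows where the
"inclusive" part would sit in a goal system.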
>===== Original Message From Peter de Blanc <peter.deblanc@verizon.net> =====
>On Fri, 2005-07-15 at 11:25 -0400, pdugan wrote:
>> An explicit association of self to universe would probably be useless, not to
>> mention anthropomorphic. However, were the AI's supergoal to assure optimal
>> growth, freedom, happiness (or whatever) for a set of entities to which an
>> identification could be assumed as a functional metaphor, and that set
>> continued to grow recursively as the AI's knowledge grew, then we'd have an
>
>I'm having a hard time interpreting this. Do you mean that your AGI
>should assure optimal growth, freedom, and happiness (or whatever) for
>any being which is sufficiently similar to itself?
>
>I'd be extremely wary of this kind of AGI. Your AGI would be able to
>determine which configuration of matter optimizes growth, freedom, and
>happiness, however you define them, and modify itself in such a way that
>this configuration now lies within its reference class for sentient
>beings. Then the universe gets tiled.
>
>In fact I think that this is what's likely to happen however you define
>your reference class for sentient beings. I don't think human FAI
>programmers are smart enough to define sentience, so an optimally
>growing, free, and happy being is not very likely to be a configuration
>which humans would actually consider to be sentient or valuable.
>
>IMO we need to back off from the idea of optimizing a utility function
>if we don't want to optimize ourselves out of existence. Abstracting a
>utility function from human morality is not a task for humans.