Re: Summary of current FAI thought

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Jun 06 2004 - 02:06:45 MDT


On Jun 5, 2004, at 1:46 AM, Eliezer Yudkowsky wrote:

> Samantha Atkins wrote:
>> Presumably it only has current knowledge of the subjects involved,
>> and that without the referents of fully sentient beings. The knowledge
>> to date, and any likely extrapolation of it over, say, the next decade,
>> is unlikely to be up to the task. Extrapolations by the FRPOP, without
>> it understanding anything about qualia or knowing sentience itself,
>> are likely to be extremely dangerous to human well-being.
>
> I did not say it would not understand "qualia". The FRPOP has to
> understand "qualia" so as to create zombies instead of real people
> when it simulates humans... or something. As I said, I confess I'm
> not entirely certain how this should work, but I still don't think
> that the characters in "One Over Zero" are genuine people, even though
> they ponder their own existences. It's just Tailsteak imagining them
> pondering their own existences.
>
> We're not talking about a limitation on power but a difference in
> architecture.

How will it understand qualia? Again, if it has only as much knowledge
about human cognition, neuroscience, and so on as we have, plus
extrapolations (presumably it can't run its own actual experiments),
then how could it understand qualia or sentience well enough to fully
model them for its extrapolation of collective volition? What am I
missing that somehow makes this possible?

>
>>> That's phrased too negatively; the volition could render first aid,
>>> not only protect - do whatever the coherent wishes said was worth
>>> doing, bearing in mind that the coherent wish may be to limit the
>>> work done by the collective volition.
>>>
>> But wouldn't the FRPOP only honor a coherent wish that allowed it to
>> fulfill its primary directive of protecting/serving humans?
>
> The "primary directive" is the collective volition, not
> protecting/serving humans or anything else the programmers would need
> to get morally right on exactly the first try.
>

But arriving at that very collective volition requires an extrapolation
from the full essential state of all (different) human beings. If that
is not possible, then the FAI would happily optimize on extrapolations
from its original, limited information. That could all too easily
result in our end in short order.

>> If the collective coherent wish was for the FRPOP to bug out, would
>> it do so?
>
> Yes. That's the *whole point*.
>

The trouble, then, is that I see no way the FRPOP could actually arrive
at a valid "collective volition".

-s


