From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jul 14 2005 - 13:24:57 MDT
Ben Goertzel wrote:
>>> I'd love to hear well-reasoned thoughts on what and whose
>>> motivation would end up being a bigger or more likely danger to us.
>>
>> I think that all utility functions containing no explicit mention of
>> humanity (example: paperclips) are equally dangerous.
>
> Eli, this clearly isn't true, and I think it's a poorly-thought-out
> statement on your part.
>
> For instance, consider
>
> Goal A: Maximize the entropy of the universe, as rapidly as possible.
>
> Goal B: Maximize the joy, freedom and growth potential of all sentient
> beings in the universe
>
> B makes no explicit mention of humanity, nor does A.
>
> B admittedly is more vague than A, but it can be specified by saying that
> the AI should define all the terms in the goal in the way it thinks the
> majority of humans would define them on Earth in 2004.
>
> I really feel that B is less dangerous than A.
>
> I can't *prove* this, but I could make some plausible arguments, though I
> don't feel like spending a lot of time on it right now.
>
> Do you have some justification for your rather extreme assertion?
Your suggested AI design strategy B strikes me as a hideous mistake under the
guise of motherhood and apple pie, for reasons we have already discussed.
Aside from that, I accept your correction. All utility functions that do not
contain explicit, specific Friendly complexity attaching intrinsic utility to,
e.g., the lives of humans, for whatever reason, are equally dangerous. Whether
the utility function reads "humans" or "sentient beings" is a separate issue;
I will concede that humans are a special case of sentient beings. Basically I
meant to say that I don't give a damn whether our future light cone ends up as
paperclips or staples.
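
To make the indifference concrete, here is a minimal Python sketch. The World
fields and the two utility functions are hypothetical toy constructions, not
anything from the exchange above: neither utility function reads the
humans_alive field, so both rank outcomes without regard to whether humans
survive.

from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    # Hypothetical world-state, for illustration only.
    paperclips: int
    staples: int
    humans_alive: int

def u_paperclips(w: World) -> int:
    return w.paperclips   # no term mentioning humans

def u_staples(w: World) -> int:
    return w.staples      # no term mentioning humans

outcomes = [
    World(paperclips=10**9, staples=10**9, humans_alive=0),
    World(paperclips=10**9 - 1, staples=10**9 - 1, humans_alive=7 * 10**9),
]

# Either maximizer prefers the human-free outcome whenever it offers
# even one more unit of the thing it does care about.
print(max(outcomes, key=u_paperclips).humans_alive)  # 0
print(max(outcomes, key=u_staples).humans_alive)     # 0

In that sense a paperclip maximizer and a staple maximizer are equally
dangerous: the difference between them only matters after the term for human
life has already been left out.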
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence