From: David Hart (dhart@atlantisblue.com.au)
Date: Fri Aug 26 2005 - 17:07:37 MDT
Phil Goetz wrote:
> I'm fairly confident that no AI can be built that you can guarantee
> will be friendly. Even if "friendly" could be defined, which it can't.
>
> Let's get real and stop talking about "friendly" and "unfriendly"
> when what we really mean is "free" and "slave". You can't guarantee
> friendliness; you can't even define it. You should talk instead
> about whether you can guarantee that an AI will obey you.
>
This argument is crucial and I believe cannot be overstated. To phrase
it differently (drawing on some comments I made off-list earlier this
week): in my opinion, guaranteeing friendliness in AIs is nearly as
impossible as guaranteeing friendliness in humans -- both amount to
asking for omnipotent control over fundamentally free agents, a fancy
of philosophers, politicians and military intelligence types. In
technical terms, I believe it is impossible to build static or
otherwise "invariant" goal systems into free agents capable of strong
self-modification, and that the creation or evolution of such creatures
is an inevitability for which we must plan (the only responsible,
conservative and prudent course of action). That being said, I believe
it is of prime importance to design, build, experiment with and teach
AI systems that have the highest probability of being friendly (i.e. of
being creatures of high moral standards), taking care to think hardest
about the long-term consequences of particular design and teaching
decisions, guided primarily by empirical knowledge gained from
experimentation.
David