From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jul 14 2005 - 07:50:56 MDT
Marc Geddes wrote:
> I’m positing that an unfriendly AI cannot self-improve
> past a certain point (i.e., its intelligence level will
> be limited by its degree of unfriendliness). I posit
> that only a friendly AI can undergo unlimited
> recursive self-improvement.
I find this to be a most improbable statement...
I am curious, however, how you are construing the sense of "Friendly" here.
For instance, what about a superhuman AI whose goal is to advance science,
math and technology as far as possible?
What fundamental limits on self-improvement do you think such an AI is going
to encounter?
Can you please outline your argument, using this science-focused AI as an
example?
I might agree that there are limits to the intelligence of an AI embodying a
classic human notion of "evil", if only because once a system becomes smart
enough, it may inevitably come to see how idiotic that notion is. The
emotional complexes underlying human evil and destructiveness may well be
tied to limitations in intelligence.
However, as Eli has ably pointed out over the years (following in the
footsteps of many prior futurists and sf authors), there are many ways for a
superhuman AI's goal system to threaten human life, even if that AI has no
"evil" in it.
-- Ben G