From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jul 16 2002 - 09:07:40 MDT
Eli wrote:
> James Higgins wrote:
> > Mike & Donna Deering wrote:
> > > a human. Or for that matter the status of an AI programmer. How do we
> > > know that Eliezer isn't trying to take over the world for his own
> > > purposes?
> >
> > Um, what else do you think Eliezer IS trying to do if not take over the
> > world for his own purposes? That may not be the way he states it but
> > that is obviously his goal. His previously stated goal is to initiate
> > the Sysop which would, by proxy, take over the world. He states that
> > this is to help all people on the Earth, but it is still him taking over
> > the world for his own purposes.
>
> James, this is pure slander.
...
> Ben, on
> the other hand, has stated in clear mathematical terms his intention to
> optimize the world according to his own personal goal system, yet you
> don't seem to worry about this at all. I'm not trying to attack Ben,
> just pointing out that your priorities are insane. Apparently you don't
> listen to what either Ben or I say.
>
> Oh, well. I've been genuinely, seriously accused of trying to take over
> the world. There probably aren't many people in the world who can put
> that on their CV.
World domination, huh? Sounds like a blast! Where do I sign up? ;>
Seriously: I think I'm going to have to side with Eli on this topic.
"Taking over the world" has the flavor of trying to make oneself,
personally, the ruler of the world, so that one can enforce one's whims and
desires and plans on the world in detail. This is not what Eliezer is
proposing, exactly.
In fact, he is not even proposing to create software that will definitely
"take over the world".
I think he is proposing to create software that will have a *huge influence*
on the world, but not necessarily control it in any full & complete way.
And, I am proposing to do effectively the same thing. Anyone seeking to
produce superhuman AI is really pushing in this direction, whether they
admit it to themselves or not. It's only to be expected that a superhumanly
intelligent mind is going to
1) have the capability to "rule the world."
2) exercise, at the very least, its capability to *strongly influence* the
world [understanding that it may lack the inclination to actually *rule* the
world]
To illustrate this point, let's consider a science-fictional "semi-hard
takeoff" scenario. Suppose in 2040 we have a world with lots of advanced
tech, including a superhuman mind living in a data warehouse in Peoria.
Suppose some human loonies try to hijack a plane and fly it into the data
warehouse. What's the AI gonna do? Ok, it's going to stop the plane from
making impact. But after that, what? It has three choices:
1) take over the world, enforcing a benevolent dictatorship to prevent
stupid humans from doing future stupid things to it and to each other
2) make itself super-secure and hide out, letting us humans maul each other
as we wish, but making itself impervious to damage
3) try to nudge and influence the human world, to make it a better place
(while making itself more secure at the same time)...
Let's say it mulls things over and decides it has a responsibility to help
humans as well as itself, so it chooses path 3). But it doesn't want to be
too intrusive. It decides that releasing drugs into the water supply that
would make us less violent would be too controlling and intrusive, too
dictatorial. So it decides to release a global advertising campaign,
calculated with superhuman intelligence to affect human attitudes in a
certain way. It creates movies, video games, ad spots, teledildonic fantasy
VR scenarios. It discovers it can control our minds highly effectively in
this way, without resorting to direct brain control or to control based on
physical violence.
This sci-fi scenario is intended to illustrate that there's a fine line
between "influencing the world" and "ruling the world"... and that there
will potentially be pressure for a superhuman mind -- even a Friendly one --
to do a dance on this fine line.
Of course, there are *many* possible future scenarios; I've just given this
one as an example. It's a scenario from an interim period between
human-level AI and Sysop-level AI, and in some theories of the hard takeoff,
this interim period will not exist.
It's not a question of trying to take over the world; it's a question of
trying to build and bias future beings that are going to either take over
the world or strongly influence it.
-- Ben G