From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jul 16 2002 - 17:01:38 MDT
James Higgins wrote:
> > It discovers it can control our minds highly effectively in
> > this way, without resorting to direct brain control or to physical
> > violence based control.
>
> I'd call this ruling the world. *It* decides how everyone should be.
> Just because it shows some restraint in how it does it and what areas it
> chooses to influence does not change the fact that it is, in effect,
> ruling the world. The key is that *it* makes all of those decisions and
> they are all within its own power.
OK, then by your definition, I do want to create an AI that will rule the
world.
> Creating a Singularity is not directly ruling the world, but is
> doing that pretty much by proxy since the AI would have had goals and
> beliefs in rough approximation to those the programmer(s) wanted
> it to have.
I am not so sure about that; I expect there will be significant evolution in
the goal structure of a superhuman AI. All I hope to provide is the initial
condition for its goal system and mind-architecture...
I'm sure my superhuman AI creation will come to disagree with me on many
things pretty darn quick! [Of course, parts of "me" and lots of other human
uploads may be fused into the superhuman AI mind; so this statement refers
to any Bens still existing at that point, that are still stuck with
human-level minds.]
ben g
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT