Creating a Positive Transcension

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Feb 13 2004 - 22:01:44 MST


In an off-list conversation, Michael Vassar raised some very valid and
simple points regarding my futurological extrapolation.

What I take from my conversation with him (which is not exactly the same as
his perspective) is as follows.

Firstly: Having human governments work in collaboration with an AI Big
Brother would lead to profound corruption (and consequently to
existential risks) UNLESS these governments were truly democratic, in a sense
that arguably is not the case right now.

Therefore, two possibilities exist for getting an AI Big Brother in place:

A) Create a more democratic, less corrupt political system here on Earth,
and then have the AI Big Brother created as a part of this system

or

B) Create an AI that's several times smarter than humans but NOT oriented
toward unlimited iterative self-improvement, and use it to take over the
world, hence imposing AI-Big-Brother-dom on the world

Michael thinks that AIs are more dangerous than MNT or biotech, and hence
he suspects that a better course would be to create an AI-free Big Brother
using MNT, biotech, or some other future technologies. However, I feel
strongly that the task of being Big Brother is too hard for any human or
group of humans, and that AGI would be needed to pull it off.

Please note, I am not a fan of fascism. The kind of AI Big Brother I am
discussing is one whose ONLY function is to prevent humans from doing things
estimated likely to incur significant existential risks. And its purpose is
not to halt progress toward the Singularity, but rather to give humanity
more time to do careful research on self-improving AI, theoretical ethics,
and related issues, so as to be able to launch our Transcension in the most
careful possible way.

However, there are two major problems with this kind of AI Big Brother,
one for each of the routes mentioned above:

A) Creating a more democratic, less corrupt political system is very, very
hard

B) Taking over the world is very risky in many ways

I.e., each of the two routes mentioned above is very problematic.

For these reasons, it would really be much nicer if early experiments
indicate that creation of an AI Buddha is a responsible thing to do...

Well, isn't reality pesky???

Some version of these thoughts will go into the revised version of the
essay. (These are really just minor variations on the thoughts in the essay
already, of course.)

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT