Re: Ben vs. Ben

From: Brian Atkins (brian@posthuman.com)
Date: Sat Jun 29 2002 - 16:49:21 MDT


Ben Goertzel wrote:
>
> So far, based on these discussions, when I rewrite the "AI Morality" essay I
> will add in the following points, with elaborated explanations:
>
> 1)
> AI morality and AI consciousness are only going to become scientific when
> they are pursued as *experimental science*, and we're just not there yet...
> until then it's just entertaining, thought-stimulating conjecture...
>
> 2)
> It's important to put in protections against unexpected hard takeoff, but
> the effective design of these protections is hard, and the right way to do
> it will only be determined thru experimentation with actual AGI systems
> (again, experimental science)

This is not good enough. No AI project should find itself both in a potential
takeoff situation and simultaneously without any mechanism to prevent a takeoff.
If you can't figure this out, then you should never run your code in the first
place. To me, this looks like another case of your overoptimism (which is the
exact opposite of what is required when dealing with existential risks: you need
to practice walking around all the time expecting doom) leading to unnecessary
risks.

>
> 3)
> Yes, it is a tough decision to decide when an AGI should be allowed to
> increase its intelligence unprotectedly. A group of Singularity wizards
> should be consulted; it shouldn't be left up to one guy.
>
> MAYBE I will also replace the references to my own personal morality with
> references to some kind of generic "transhumanist morality." However, that
> would take a little research into what articulations of transhumanist
> morality already exist. I know the Extropian stuff, but for my taste, that
> generally emphasizes the virtue of compassion far too little....

Speaking as a human who is potentially affected by your AI, this isn't
good enough for me. You'll have to come up with a better answer before I'll
willingly go along with such a plan.

>
> What I will not do in any revision of the essay -- except one written after
> significant experimentation with a Novababy has been done -- is introduce
> any definitive statement that one or another particular approach to ensuring
> Friendliness, or measuring intelligence increase, is likely to be effective.
> I feel there is just too much uncertainty in these regards, at this stage.

In the face of uncertainty you have to err on the side of CAUTION, which means
you stick those things into your AI before you run it, JUST IN CASE, even if
they turn out not to work after testing. Something is better than nothing.
 
I'm glad of your uncertainty, but you're not handling it the way you would
rationally handle an existential risk; you're handling it more like starting a
business with someone else's money, where if it doesn't work out then "oops, oh
well". Not good enough.

>
> Are there any other significant issues that you think should be addressed in
> the revision, Brian? Knowing what you know now about my overall point of
> view on these matters?
>

Any other legitimate points Eliezer or others made to you privately or publicly
should be addressed. The issue should be looked at from all sides. Three times.
Then look at it again.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/
