RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Thu May 29 2003 - 21:41:41 MDT


Unfortunately, I agree with Mark that there's a lot of danger in a small
inbred group of people going off and trying to whip a seed AI together, and
I also agree with Eliezer that gov't regulation is unlikely to effectively
manage the seed AI problem.

I think Eliezer is absolutely right that understanding Friendly AI is a damn
hard problem. I don't agree with his pessimistic assessment that creating a
seed AI without a full understanding of Friendliness is almost sure to lead
to dire consequences, though. I think that Matt Stewart's comment on
probability estimates is pertinent here. We just don't know! And knowing
in advance is going to be mighty hard.

I am hoping, as I've said before & will say again, that experimentation with
near-human-level AGI systems will allow us to learn a lot more about AIs
and their moralities and their dynamics under self-modification. Maybe this
will teach us enough to really seriously address the damn hard problem at
hand...

-- Ben Goertzel

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Mark Waser
> Sent: Thursday, May 29, 2003 8:44 PM
> To: sl4@sl4.org
> Subject: Re: SIAI's flawed friendliness analysis
>
>
> Wow, Eliezer. I hope that you're just having a bad day and that this
> isn't your new public relations policy. Factually, you're mostly correct,
> but the attitude and condescension are way over the top, and you don't
> seem to be making any effort to understand what others are saying or to
> help them along.
>
> If you want a good example of a regulation that might work, try something
> that sets up an advisory board of the right people (you can even include
> yourself on it) that constantly reviews all certified projects, evaluates
> their current risks, and takes appropriate action.
>
> I don't have any faith in a small INBRED group of people going off and
> trying to whip something together without any mistakes when any mistake
> could be fatal in the biggest way possible. I want as many eyes as
> possible on the project. Yes, the more people who see it, the greater the
> possibility that people might steal the ideas and try to go for an
> illegal AI, but I believe that the probability of such an attempt
> succeeding is much less than the probability of a big problem if you
> don't get as many eyes as possible on the original project.
>
> An even better idea would be to work out an ethics system before you work
> on the AI system. My belief is that an ethics system should actually
> follow straight from your "rationality" project -- speaking of which, how
> is that going?
>
> Mark
>
> ----- Original Message -----
> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
> To: <sl4@sl4.org>
> Sent: Thursday, May 29, 2003 5:38 PM
> Subject: Re: SIAI's flawed friendliness analysis
>
>
> > Philip Sutton wrote:
> > >
> > > I would like you to explain why in language that a non-mathematician
> > > can understand. If you can't get around to explaining your ideas in a
> > > form that an intelligent, informed non-mathematician can understand,
> > > then you are committing yourself to failing to communicate with the
> > > people you want to persuade not to adopt Bill's approach.
> >
> > Nature is not obligated to make her problems easy enough for
> > intelligent, informed non-mathematicians to understand them. But of
> > course people get quite indignant when told that a problem may be too
> > difficult for them. Maybe, *maybe* if it's someone like a physicist, in
> > a nice, already-established, famously difficult area of science, someone
> > might be willing to believe that this field is too difficult to be
> > grasped over lunch. Why? Because it taps into the ready-made "witch
> > doctor" instinct for understanding a field as arcane and barred to
> > outsiders. Lacking any established witch doctors, of course, the
> > presumption must be that your opinions are as good as anyone's and that
> > the problem itself is, oh, about as simple as anything else your brain
> > expects to run into. Hunter-gatherers don't confront hard scientific
> > problems. But just because there isn't a field of AI with an
> > established, confirmed theory of intelligence, and scientists with
> > reputations for being in a difficult field, does not mean that the
> > problem of AI will be simple. The difficulty is set by Nature. I might
> > try to explain the problem to an intelligent, informed
> > non-mathematician. But remember that Nature is under no obligation
> > *whatsoever* to make the problem comprehensible.
> >
> > If you are doing something that will, in fact, kill you, Nature is under
> > no obligation to make this obvious to you. Nature has no privileged
> > tendency to avoid killing people whenever her reasons cannot be
> > explained to an intelligent, informed non-mathematician.
> >
> > Now, bearing that in mind, you might start at:
> > http://intelligence.org/CFAI/design/structure/why.html
> > and go on from there.
> >
> > --
> > Eliezer S. Yudkowsky http://intelligence.org/
> > Research Fellow, Singularity Institute for Artificial Intelligence
> >
> >
>
>


