Re: Military Friendly AI

From: Brian Atkins (brian@posthuman.com)
Date: Thu Jun 27 2002 - 21:01:56 MDT


Ben Goertzel wrote:
>
> Brian Atkins wrote:
>
> > James Higgins wrote:
> > >
> > > I would tend to worry very little if Ben was about to kick off a
> > > Singularity attempt, but I would worry very much if you, Eliezer, were.
> >
> > That's quite odd since last I checked Ben wasn't even interested in
> > the idea of Friendliness until we invented it and started pointing out
> > to SL4 exactly how important it is.
>
> Quite the contrary, Brian. I -- like Pei Wang, Minsky, Peter Voss, and many
> other AI researchers -- have been thinking about Friendliness for many
> years. Since Eliezer was in diapers -- and in Minsky's case, since before I
> or Eliezer were born! These are not new ideas. The term "Friendly AI" is
> due to Eli, so far as I know, but the concept certainly is not.

I'll have to disagree, and say that many of the ideas in CFAI are AFAIK
novel. Give him a little credit for inventing more than just the term. And
as for all the thinking you and others did, I don't see that it produced
much in the way of results during the period before CFAI was published.
Where's all the prior art, and why aren't we debating that stuff here, and
comparing it to CFAI to see which is better?

>
> Over the last 15 years, I have chosen to focus my research work, and my
> writing, on the creation of real AI, rather than on the Friendliness aspect
> specifically. This is not because I consider Friendliness unimportant. It
> is, rather, because -- unlike Eliezer -- I think that we don't yet know
> enough about AGI to make a really detailed, meaningful analysis of the
> Friendly AI issue. I think it's good to think about it now, but it's
> premature to focus on it now. I think we will be able to develop a real
> theory of Friendly AI only after some experience playing around with
> infrahuman AGI's that have a lot more general intelligence than any program
> now existing.

Which tends to strike me as a dangerous approach.

>
> I believe my attitude toward Friendliness is typical of AGI researchers.

Unfortunately, yes; not many people seem to be as careful as we would all like
when playing around with existential technologies.

> It's not that no one but Eliezer realizes the issue exists, or is
> important -- it's not that he brought the issue to the AI community's
> attention. It's rather that he's nearly the only one who believes it's
> possible to create a detailed theory of Friendliness *at this stage* prior
> to the existence of infrahuman AGI's with a decent level of general
> intelligence.
>
> Personally, I think he's largely wrong on this; I think that his theory of
> Friendly AI is not all that valuable, and that it will look somewhat
> oversimplistic and naive, in hindsight, when we reach the point of having a
> powerful infrahuman AGI.
>
> The idea of self-modifying AI causing exponentially increasing intelligence
> is also something AI researchers have been talking about for years -- Minsky
> since the 70's or earlier. What distinguishes Eliezer is not his
> understanding of the long-term relevance of this issue, but the fact that
> he's one of very few AI researchers who thinks that this issue is worth
> paying a lot of attention to *now*. Most AI researchers, rather, believe
> that only once we have an infrahuman AGI with a lot of intelligence, does it
> make sense to pay a lot of attention to intelligence-increasing
> self-modification.

So can your position be summarized as: we'll build our AI, get it working at
some subhuman level, and then, when we guess it needs it, we'll stop running
it for a while until we figure out how to ensure "Friendliness"? I think
your protocol needs to be fleshed out further for us so we can feel more
comfortable with your plans.

>
> Now, no one has proved they know how to construct an AGI. It is possible
> that Eliezer is correct that it makes sense to spend a lot of time on these
> issues *now*, before we have a decent infrahuman AGI. But it is not right
> to claim that others don't understand these issues, or think they're
> serious, just because they think the task of creating a decent "real AI"
> should come temporally first.

It sounds dangerous to me (and I guess to others here) to build the AI first
and let it run for some time without any special F features built in. How
will your protocol ensure that it does not take off, and if it does, how can
we be assured it will turn out ok?

>
> I note that, while Eli has been focusing on these topics, he has not made
> all that much observable progress on actually creating AGI. He has
> performed a valuable service by bringing ideas like AI morality and AI
> self-modification to a segment of the population that was not familiar with
> them (mostly, members of the futurist community who are not AI researchers).
> But by making this choice as to how to spend his time, he has chosen not to
> progress as far on the AI design front as he could have otherwise.

Can we please stop talking about Eli Eli Eli for a few minutes? Thank you.

>
> > Not that it seems to have had much
> > effect since he still has no plans that I know of to alter his rather
> > dramatically risky seed AI experimentation protocol (basically not
> > adding any Friendliness features until /after/ he decides that the
> > AI has advanced enough) (he has a gut feel you see, and there's certainly
> > no chance of a hard takeoff, and even if it did he's quite sure it would
> > all turn out ok... trust him on it)
>
> I think that it is not possible to create a meaningful "Friendly AI" aspect
> to Novamente at this stage. I am skeptical that it's possible to create a
> meaningful "Friendly AI" aspect to any AI architecture in advance, before
> one has a good understanding of the characteristics of the AI in action.

Why not at least build in some kind of "controlled ascent" feature?

>
> I do trust my intuition that there is no chance of Novamente having a hard
> takeoff right now. The damn design is only about 20% implemented! We will
> know when we have a system that has some autonomous general intelligence,
> and at that point we will start putting Friendliness-oriented controls in
> the system. Putting this sort of control into our system now would really
> just be silly -- pure window dressing.

So at exactly what stages of development do you plan to implement which
F features? You have some sort of protocol, right?

>
> You may say "Yeah, Ben, but you can't absolutely KNOW the system won't
> achieve a hard takeoff tomorrow." No, I can't absolutely know that, and I
> can't absolutely know that I'm not really a gerbil dreaming I'm an AI
> scientist, either; nor that the universe won't spontaneously explode three
> seconds from now. But there's such a thing as common sense. There are a
> dozen other people who know the Novamente codebase, and every single one of
> them would agree: there is NO chance of Novamente as it is now, incomplete,
> achieving any kind of takeoff. It does not have significantly more chance
> of doing so right now than Microsoft Windows does. I am sure that if Eli
> saw the codebase as it now exists he would agree -- not that it's bad, it's
> just very incomplete.

This discussion is not about right now, it is about later.

>
> > I guess it is because we go to the effort to
> > put our plans out for public review and he sits in with the rest of the
> > crowd picking them apart. At least we _have_ plans out for public
> > review.
> >
>
> Eliezer has a much more detailed plan for AI friendliness than I do, but in
> my view it's sort of a "castle in the air," because it's based on certain
> assumptions about how an AI will work, and Eliezer does not have a detailed
> design (let alone an implementation) for an AI fulfilling these assumptions.
> The whole theory may be meaningless, if it turns out it's not possible to
> make (or even thoroughly design) an AGI meeting the assumptions of the
> theory.

If your design is incapable of supporting such features, and you have been
unable to come up with your own seemingly impregnable way to keep your AI
"Friendly" throughout its development into superintelligence, then maybe
we should be getting worried?

I assume that if you get your working infrahuman AI, and are unable to
come up with a bulletproof way of keeping it "Friendly", you will turn it
off? How will you judge whether or not it is safe to continue allowing it
to grow?

>
> Also on that page you will find a link to an essay I wrote on "AI Morality".
> (Eliezer and some others pointed out some minor flaws in that paper, which I
> have not yet found time to correct, but it still basically represents my
> views.) I do not give a detailed theory of Friendly AI comparable to
> Eliezer's there, but I do explain generally how I expect AI morality to
> work, and discuss some of the issues I have with Eliezer's ideas on Friendly
> AI. I stress that this is something I've thought about "in the background"
> for a long time, but NOT something that has been a major focus of my work
> lately, because of my belief that the right way to do Friendly AI will only
> be determinable via substantial experimentation with early-stage infrahuman
> AGI's.

Well, do you think it's worth our trouble to read it? If so, I'd like to see
some discussion about it (perhaps Eliezer will allow you to repost the flaws
he saw in it), since I don't recall any threads regarding it (if I've
forgotten, someone please give me a URL to the archives, thanks).

>
> > How about we set July for picking Ben's plan apart. After all he is far
> > closer to completion (he claims) than anyone else, yet few people here
> > seem to have anywhere near as good a grasp of his ideas compared to
> > SIAI's.
> >
> > Disclaimer: this post is not intended to start any kind of us vs. them
> > phenomena. It exists simply to point out a perceived important difference
> > in the amount of critical discussion regarding the two
> > organizations' plans.
>
> Regarding picking my ideas on Friendly AI apart, that sounds like a fun
> discussion! However, I will be on vacation from July 1-11 (though I will
> check e-mail occasionally); hence I suggest to postpone a long and detailed
> thread on this until mid-July when I get back.
>
> Regarding picking the Novamente AI design apart, unfortunately a really
> detailed thread on that will have to wait until sometime in 2003, when the
> book comes out. There is a lot of depth there, much more than most of the
> readers of the first draft saw (due to the flaws of the first draft), and a
> detailed discussion of the design among a group who doesn't *know* the
> details of the design, is unlikely to be productive.
>

I agree; at this time I'm more interested in discussing your FAI ideas and
experimentation protocol. We can take a break while you're gone.

I just started reading your AI Morality paper; I'm sure I'll have more
comments later, but I guess this part is a bit scary to everyone here who
is afraid of the initial AI programmers having too much control over the
AI's final state:

  "But intuitively, I feel that an AGI with these values is going to be a
   positive force in the universe – where by “positive” I mean “in accordance
   with Ben Goertzel’s value system”."

http://www.goertzel.org/dynapsyc/2002/AIMorality.htm

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

