From: Ben Goertzel (ben@goertzel.org)
Date: Mon Apr 01 2002 - 19:42:44 MST
Carlo,
I suggest you read Eliezer's essay CREATING A FRIENDLY AI, which is on the
SingInst website.
He agrees with you that AIs will be goal-oriented, and discusses the goal
of Friendliness, as distinct from the goal of creating a smarter AI.
I don't agree with all he says there, but I think it's an excellent essay
and will clarify many of the issues you are raising in your e-mail.
Once you've read his essay, you can join the ongoing debate Eliezer and
I have been having on AI goal structures.... In brief: I think that
Friendliness has to be one among many goals; he thinks it should be wired in
as the ultimate supergoal in a would-be real AI.
ben g
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com]On Behalf
> Of Carlo Wood
> Sent: Monday, April 01, 2002 7:09 PM
> To: sl4@sysopmind.com
> Subject: Re: AI and survival instinct.
>
>
> On Mon, Apr 01, 2002 at 05:49:24PM -0700, Ben Goertzel wrote:
> > I will say again: I think this is possible, but not very likely.
> > You have not presented me with what I consider a convincing
> > argument for the likelihood of this possibility.
>
> That is correct, I have not actually given any argument.
> But neither have you. Nevertheless, all we can argue about
> is the likelihood, and since you already acknowledge the
> possibility, that is enough for me. Hopefully you then agree
> with me that staying in control should always be the highest
> priority, and that we should never take the risk of creating
> an intelligence greater than ours that might choose to
> exterminate us. With this in mind, I think that the first AI
> in particular must operate in a very controlled environment,
> until we learn more about the goals, desires, or "instincts"
> of the beings we create.
>
> Finally, I do have a small argument to add, but I chose to
> post my introductory mail first ;).
>
> I think that an Artificial Intelligence will have to be "goal
> driven", just like humans. Otherwise we cannot speak of
> 'intelligence' (correct me if I am wrong there).
>
> With "goal driven" I mean that a human seeks satisfaction of a
> desire, by achieving a goal. Psychologist have divided humans
> in a bunch of groups where it comes to what drives a human being.
> One of them is "security" (without doubt my drive too).
> Example: I seek security - therefore I study, get a job and get
> lots of money that I safe and keep on my bank account. Also,
> I wanted security and because was hurt often in normal social
> contacts I started an electronics hobby and became a radio pirate:
> six years of my life my social contact have been anonymously
> with only audio as contact - and that has worked for me very
> well (I was a popular guy on the radio, unlike at school).
> Later I was using IRC and I wanted to chat in a safe environment,
> because I needed security and ONLY because of that drive, I've
> improved the IRC protocol and as such made it impossible to take
> over channels anymore (I designed 'ircu', the ircd that is still
> being used on undernet IRC (when I first went there it had 10
> users, now it has 80,000)). etc.
>
> Roughly speaking, the thought process involved in achieving
> a goal works as follows:
>
> random generator --filter--> neural network that predicts --> expected result
>    of actions                the result of each action              |
>         ^                                                           v
>         |                                          goal ------> comparison
>         |                                                           |
>         `------------------------ feedback <------------------------'
>
> And this doesn't work discretely: there is no 'clock pulse'.
> It works like the feedback of an op-amp (electronics), or
> like solving a non-linear differential equation. That is
> what our brain is very good at: solving non-linear equations.
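>
> A minimal, discrete sketch of that loop in Python (the names and
> numbers are purely illustrative, and a stepped loop like this is
> of course only an approximation of the continuous process):
>
> import random
>
> GOAL = 10.0  # the result we want to achieve
>
> def predict_result(action):
>     # Stand-in for the neural network: what we expect the
>     # result of an action to be (just a toy model here).
>     return action * 2.0
>
> def feedback_loop(steps=1000):
>     center, spread = 0.0, 10.0            # the "filter" on the generator
>     best_action, best_error = None, float("inf")
>     for _ in range(steps):
>         action = random.gauss(center, spread)   # random generator
>         expected = predict_result(action)       # neural network's prediction
>         error = abs(GOAL - expected)            # comparison with the goal
>         if error < best_error:                  # feedback: narrow the filter
>             best_action, best_error = action, error
>             center, spread = action, spread * 0.95
>     return best_action, best_error
>
> print(feedback_loop())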
>
> Of course, the above is VERY incomplete and not the subject
> of discussion. I am just trying to make a point here:
> If an AI works in this way, and the 'goal' we set it
> is to build an even faster brain, then what will happen
> when it considers being terminated prematurely?
> It would immediately conclude that if its existence ended,
> it would *certainly* not be able to achieve its goal.
> Therefore, independent of the goal it has, it will consider
> surviving to be the highest priority and might choose to put
> most of its time into securing its own continuation.
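>
> The same point in numbers (the probabilities are made up; any
> values would show the same pattern):
>
> # Expected goal achievement is P(survive) * P(goal | survive),
> # because a terminated AI achieves nothing. So for ANY goal,
> # raising P(survive) raises the expectation: survival becomes
> # an instrumental subgoal.
> p_goal_given_survival = 0.3
>
> for p_survive in (0.5, 0.9, 0.99):
>     print(p_survive, "->", p_survive * p_goal_given_survival)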
>
> --
> Carlo Wood <carlo@alinoe.com>
>