Re: ethics

From: fudley (fuddley@fastmail.fm)
Date: Thu May 20 2004 - 10:02:50 MDT


Eliezer wrote:

>Too little information about something I build?

Yes, exactly. You only know about the seed AI, that’s all. If you knew
nothing about oak trees except the DNA sequence in an acorn, do you think
it would be a trivial task to calculate what the resulting tree would
look like? Could you even tell it was a tree and not a bush? And an oak
tree is not constantly modifying and upgrading itself; an AI is.
 
>> Me:
>> the fact that with just a few lines of code I can write a program
>> that behaves in ways you can not predict does not bode well
>> for the success of your enterprise.

>Eliezer:
>Why?

Because there was nothing unique in the example I cited; there are lots
of similar enigmas. Somewhere in that huge planet-sized brain there is
likely to be at least one tiny sub-sub-subroutine (probably trillions of
them) that acts in ways you cannot predict.
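For instance (a minimal sketch in Python, not necessarily the exact
program I cited before), here are a few lines that halt only if the
Goldbach conjecture is false. Nobody on Earth can tell you whether they
ever halt:

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_sum_of_two_primes(n):
    # Goldbach: every even number > 2 is the sum of two primes (unproven)
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while is_sum_of_two_primes(n):
    n += 2
# No one knows whether this line is ever reached:
print("Goldbach counterexample:", n)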

>I have no idea what you think "intelligence" is

I have no detailed definition of intelligence, and I doubt one will ever
be developed that can be written on a piece of paper smaller than a
mid-sized galaxy. I do, however, have examples. Intelligence is the thing
Einstein had and television preachers (and sea slugs) do not. On a
personal note, for me intelligence is the most exciting thing in the
universe; optimization processes sound like a bit of a bore. Do you really
want to make a super AI, sorry, super optimization process that is a lap
dog, the slave of human beings? Is that even ethical?

>Didn't you just get through saying to me that you didn't understand
>"intelligences"? How are you making all these wonderful predictions
>about them?

I admit I was just guessing when I said a super AI would probably want to
be happy; I still think it’s a good guess, but the only thing I’m sure of
is that I will not understand at least some of the things an AI with a
brain the size of a planet will want.

>An optimization process can have complicated ends, and can find
>novel means to those ends. What I would guarantee is that they will be
>good ends, and that the novel means will not stomp on those ends.

How on earth could you guarantee such a thing? I don’t see how your
mentally shackled machine could ever come up with something really new,
but for the sake of argument let’s say it could. If your machine had been
the one that discovered nuclear fission, how could it know whether that
would turn out to be something good or bad? And even if you could somehow
magically hardwire unchangeable axioms into the system, you’d just be
inviting disaster. If you say the machine must absolutely, positively do
X, no exceptions, sooner or later it will find itself in a situation
where doing X is impossible, and then your machine will go into an
infinite loop.
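As a toy sketch of the failure mode I mean (the names are hypothetical,
and this is nothing like how you would actually build the thing; the
point is only the control flow):

def can_do_x(world):
    # In some situations X simply is not possible.
    return world.get("x_is_possible", False)

def obey_hardwired_axiom(world):
    # "The machine must absolutely, positively do X, no exceptions."
    while not can_do_x(world):
        pass                     # keep trying... forever
    print("X accomplished")

# obey_hardwired_axiom({"x_is_possible": False})  # would spin in an infinite loop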

>An FAI doesn't share the characteristics of "hugely complicated programs"
>as you know them. It may be a complex dynamic, but it's not a computer
>program as you know it.

Huh?!

John K Clark

   


