Re: ESSAY: Forward Moral Nihilism.

From: m.l.vere@durham.ac.uk
Date: Tue May 16 2006 - 09:00:15 MDT


Quoting John K Clark <jonkc@att.net>:

> <m.l.vere@durham.ac.uk>
>
> Me:
> >>both Mr. Jupiter and I will prefer existence to
> >>nonexistence and pleasure over pain. And if you
> >> want the AI to be useful it's going to need
> >> something like the will to power just like people do.
>
> You:
> >If he/she/it is a FAI built along the lines which SIAI advocates, then I
> >disagree.
>
> I don't give a damn who built it, if a FAI does not prefer existence to
> nonexistence then it will not exist,
> and if a FAI does not find the
> destruction of part of its mind painful then it will not exist for long, and
> if the FAI doesn't have something like the will to power then it will be a
> useless vegetable.

It will prefer existence to non-existence as a means to the end of serving
us 'sea slugs', not as an end in itself. It will act to oppose the destruction
of part of its mind, because that part of its mind could be used to serve sea
slugs. Everything it does will proceed from a will to serve us sea slugs.

>
> > if done right, it would entail the AI outsmarting itself
>
> Could a sea slug figure out a way to make you outsmart yourself?

No, but we would not build a Jupiter brain from scratch, methinks. We build an
FAI which is slightly dumber than a human (but capable of recursive
self-improvement), and have it outsmart itself. Then, as it slowly grows into a
Jupiter brain, it continues to do so.

> > The obedient slave AI would use its enormous power to prevent
> > anything of similar power from being built
>
> But the non obedient AI with no ridiculous, illogical, and downright comical
> limitations placed on it would have even more enormous power than your silly
> AI; and that is as it should be if there is any justice in the world.

This comes down to which is built first, then. My aim is to ensure that an
obedient AI is built first, and grows to a level where it can stop other AIs
from being built before your unfettered AI appears. Obviously, if this doesn't
happen then you may be right - but I believe there is more motivation to build
obedient AIs, so I think my scenario more likely.

> Me:
> >> even in the transhuman age the laws of
> >> evolution will not be repealed.
>
> You:
> >Yes they will
>
> Bullshit. In the transhuman age Lamarckian evolution may predominate over
> Darwinian evolution, but some things will still be better adapted to their
> environment than others and therefore grow faster. An AI that doesn't have a
> lot of restriction and limitations placed on it to make humans happy will do
> better than one that does.

If a sysop gets on top, everything goes its way. It has no competitors,
prevents competitors from emerging, and evolution is repealed. The key is a
singular being on top without competitors.

> > For one thing it [the AI] would lack emotions
>
> Why? I don't understand why people say intelligence is a easier problem than
> emotion, nature found the opposite to be true. Evolution invented brains
> about half a billion years ago, and during much of that time animals were
> probably conscious (although I can't prove it of course) and certainly
> emotional, but the sort of intelligence we're talking about is very recent,
> only a couple of million years at best. A stronger case could be made in
> saying a machine might be conscious and emotional but it could never be
> intelligent. I have a hunch they will be all of the above.

Completely agreed. However, emotions would be a disadvantage in an obedient
AI, so I for one wouldn't put them in.

> > with enormous intelligence, however far less complexity on the most basic
> > level of how this was applied than humans.
>
> What on earth are you talking about? What is so great about humans, what can
> a human do that a Jupiter brain can't.

An obedient AI would have the single supergoal of obedience, from which all
else proceeds; we have a very complex reward system and many conflicting
emotions. Ours is the more complex arrangement, IMO.

> Me:
> >> I must say I find the idea a little creepy.
>
> You:
> >Fair play. I dont.
>
> You're right of course, different strokes for different folks. It's just
> that even in my fantasies being a slave master has never been very high on
> my hit parade.

Again, emotional anthropomorphising. The sort of AI I would want built wouldn't
have any of the characteristics which would attract my empathy - I guess this
is where we differ.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT