From: m.l.vere@durham.ac.uk
Date: Mon May 15 2006 - 13:29:51 MDT
Quoting John K Clark <jonkc@att.net>:
> <m.l.vere@durham.ac.uk>
>
> > I do believe you are anthropomorphising the AI.
>
> Yes of course I'm anthropomorphizing the AI, it is a useful tool, sometimes
> the only tool, in predicting the behavior of other beings. And although a
> Jupiter brain will have many characteristics that are different from ours
> some will be in common; both Mr. Jupiter and I will prefer existence to
> nonexistence and pleasure over pain. And if you want the AI to be useful
> it's going to need something like the will to power just like people do.
Who is Mr Jupiter? If Mr Jupiter is a posthuman who was originally a human
being - perhaps so. If he/she/it is a FAI built along the lines which SIAI
advocates, then I disagree.
> > We are only concerned with our own wellbeing because that is how evolution
> > programmed us. We program a FAI to concern itself with whatever we want.
>
> Even today with our simple machines computers often behave in ways that we
> don't like and don't fully understand, the idea that you can just tell an AI
> to obey us and it will keep doing so for eternity is crazy, because that
> would entail outsmarting something far smarter than you are.
No, if done right, it would entail the AI outsmarting itself, and having its
only motivation be to continually do so. Obviously it could go wrong - but to
me this seems like our best shot at gaining maximum benefit from the
singularity.
> And even if you had an obedient slave AI it wouldn't be the top dog for long
> because somewhere else an AI would develop that isn't hobbled by human
> wishes and overtake it.
Nope. The obedient slave AI would use its enormous power to prevent anything
of similar power from being built - in order that it (and by extension its
master(s)) would remain top dog.
> Imagine if a human being suffered a mutation that
> caused him to care more about sea slugs than his own life or that of his
> children, do you imagine such a mutation would come to dominate in the gene
> pool?
(Assuming hard takeoff) the first superintelligent AI will have no
competitors, so this is not a valid analogy.
> You seem to think an AI who had such a bizarre obsession with humans
> would be viable; I don't because even in the transhuman age the laws of
> evolution will not be repealed.
Yes they will. (As before) a FAI sysop gains absolute power, has no
competitors, and uses that absolute power to prevent competitors from
emerging. Everything then proceeds according to its will, as opposed to the
laws of natural selection.
> > a FAI will essentially be an (unimaginably powerful) optimisation process,
> > and lack many of the things that make us human.
>
> An AI would lack meat, but that's about all.
No, I don't think that would be true. For one thing it would lack emotions, as
well as the complex, evolved anticipation of pleasure/pain reward mechanisms.
It would simply be an optimisation process - with enormous intelligence, but
far less complexity than humans at the most basic level of how that
intelligence is applied.
> > It's not a slave in the traditional sense, as being subservient is what it
> > most wants.
>
> I don't believe it's possible so the moral question is probably moot, but I
> must say I find the idea a little creepy.
Fair play. I don't.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT