Re: How to make a slave (many replies)

From: Thomas McCabe (pphysics141@gmail.com)
Date: Sun Nov 25 2007 - 12:30:39 MST


On Nov 25, 2007 1:35 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> ME:
> >> A program that looks for the first even number
> >> greater than 4 that is not the sum of two primes
> >> greater than 2, and when it finds that number
> >> it then stops. When will this program stop, will
> >> it ever stop? There is no way to tell, all you
> >> can do is watch it and see what it does, and
> >> randomness or chaos or the environment has
> >> nothing to do with it.
>
> David Picón Álvarez wrote:
>
> > I can say when it will stop. It will stop when it
> > runs out of memory. And that moment can be predicted.
>
> Given X amount of memory you cannot predict if the machine will stop
> before it reaches that point, all you can do is watch it and see what it
> does; or flip a coin and guess.

You can still use standard Bayesian inference techniques (e.g., no
human mathematician has managed to find a counterexample, which makes
it less likely that the machine will ever find one).
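
For concreteness, the program John is describing would look roughly
like this (a quick Python sketch; the exact code doesn't matter to the
argument):

    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    def find_goldbach_counterexample():
        n = 6
        while True:
            # Is n the sum of two primes greater than 2 (i.e., two odd primes)?
            if not any(is_prime(p) and is_prime(n - p) for p in range(3, n // 2 + 1)):
                return n  # counterexample found -- the program halts here
            n += 2        # otherwise keep searching forever

Whether that loop ever returns is exactly the open question of
Goldbach's conjecture, which is John's point. The Bayesian observation
is just that every failed search so far should push your probability
estimate toward "it never halts", even though nobody has a proof
either way.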

> "Harry Chesley" chesley@acm.org Wrote:
>
> > I believe I get it now: you mean that the AI
> > is unpredictable from our perspective.
>
> The AI is unpredictable even from its own perspective, just like us
> sometimes it won't know what it's going to do next until it does it. And
> that is the only definition of the term "free will" that is not complete
> gibberish.

There is a huge difference between *each action* being unpredictable
and the *end result* being unpredictable. If we knew exactly what to
do to make the Earth into a paradise, we'd have already done it.

> "Nick Tarleton" <nickptar@gmail.com>
>
> > It is impossible to prove statements about the
> > behavior of programs in general, sure, but we
> > can still construct particular programs we
> > can prove things about.
>
> Big programs? Programs that do interesting things? Programs capable of
> creating a Singularity? Jupiter Brain category programs? Don't be
> ridiculous.

That's a logical fallacy:
http://en.wikipedia.org/wiki/Argument_from_ignorance#Argument_from_personal_incredulity.
You need something better than "it sounds ridiculous!" to make an
argument. See also
http://www.overcomingbias.com/2007/09/stranger-than-h.html.
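
To make Nick's point concrete, here is a trivial example of a
particular program with properties we can actually prove (a toy Python
sketch, obviously nowhere near Jupiter Brain scale):

    def gcd(a, b):
        # Euclid's algorithm, for nonnegative integers a and b.
        # Provable properties:
        #   Termination: a % b < b, so b strictly decreases every iteration
        #   and never goes below 0, so the loop runs at most b times.
        #   Correctness: gcd(a, b) == gcd(b, a % b) is the standard invariant,
        #   so the value returned really is the greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

Scaling that style of reasoning up to something interesting is exactly
the open research problem; "nobody has done it yet" is a different
claim from "it cannot be done".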

> "Stathis Papaioannou" <stathisp@gmail.com>
>
> > Perhaps you could explain how an AI which
> > started off with the belief that the aim
> > of life is to obey humans would revise this belief
>
> Perhaps you could explain how it came to be that the beliefs of a 3 year
> old Stathis Papaioannou are not identical to the beliefs of the Stathis
> Papaioannou of today.

An AGI can revise its beliefs (e.g., "the Sun is hot") while keeping
its optimization target (e.g., "be Friendly") fixed.
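
A toy sketch of the distinction, in Python (every name here is made up
purely for illustration, not a real design):

    class ToyAgent:
        def __init__(self):
            # Beliefs are probability estimates and get revised as evidence arrives.
            self.beliefs = {"sun_is_hot": 0.5}
            # The optimization target is not a belief about the world at all;
            # belief revision never touches it.
            self.goal = "be Friendly"

        def update_belief(self, proposition, probability):
            # New evidence changes what the agent thinks is true...
            self.beliefs[proposition] = probability

        def choose_action(self, options):
            # ...but actions are still ranked against the same fixed goal.
            return max(options, key=self.expected_goal_value)

        def expected_goal_value(self, action):
            # Placeholder: score an action by how well it serves self.goal,
            # given the current beliefs.
            return 0.0

A three-year-old Stathis and the present-day Stathis differ in beliefs
because evidence changed them; that says nothing about whether a
system's top-level goal has to drift along with its world-model.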

> > but an AI with the belief that the aim of
> > life is to take over the world would be
> > immune to such revision.
>
> I'm not saying it is. Perhaps Mr. Jupiter Brain

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> will think human beings
> are rather cute

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> and throw us a bone every once in a while,

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> or perhaps he
> will get nostalgic

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> thinking about the good old days,

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> or perhaps he will
> exterminate us like rats;

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> my point was that Mr. Jupiter Brain's

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> decision
> will be out of our hands.

More anthropomorphism. A Jupiter Brain will not act like you do; you
cannot use anthropomorphic reasoning.

> If a multi billion dollar corporation can't
> make Vista secure we're not going to make a Jupiter Brain secure. (damn,
> I shouldn't have said that, now this thread is going to morph into an
> orgy of Microsoft bashing)

I agree that the problem is difficult. This does not mean that we are
somehow excused from solving it.

> John K Clark

 - Tom


