Re: [sl4] Complete drivel on this list: was: I am a Singularitian who does not believe in the Singularity.

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Mon Oct 12 2009 - 14:52:42 MDT


On Mon, Oct 12, 2009 at 03:33:48PM -0500, Pavitra wrote:
> Eric Burton wrote:
> > On Mon, Oct 12, 2009 at 11:48 AM, John K Clark <johnkclark@fastmail.fm> wrote:
> >> On Sat, 10 Oct 2009 "Robin Lee Powell" <rlpowell@digitalkingdom.org>
> >> said:
> >>> Just for the record, John Clark is something of a local nutjob;
> >> And by the way, if I called you a "nutcase" there would have been howls
> >>
> >> of protest and demands that I be kicked off the list, but I don't mind,
> >> I'm a big boy and have been called worse.
> >
> > I'll call him a damn nutcase.
> >
> > Robin Lee Powell, where do you get off
>
> Let's put that to a more serious test.
>
>
> <flame>
> Robin Lee Powell wrote:
> > John is claiming that the math of Turing and Goedel proves
> > things that it simply, clearly, does not in any way actually prove.
> > He is making new and novel statements about important mathetical
> > theorems that are quite well understood by many, many people on this
> > list.
>
> Where is he claiming that?

------------------

Message-Id: <1255022723.14283.1338982997@webmail.messagingengine.com>
From: John K Clark <johnkclark@fastmail.fm>
To: sl4 sl4 <sl4@sl4.org>
Subject: Re: [sl4] I am a Singularitian who does not believe in the Singularity.
In-Reply-To: <20091008161202.GA19667@randallsquared.com>

On Thu, 8 Oct 2009 16:12:02 +0000, "Randall Randall" said:

> they're [FAI people] suggesting that there can and should be a highest-level
> goal, and that goal should be chosen by AI designers to maximize human
> safety and/or happiness. It's unclear whether this is possible

It's not unclear at all! Turing proved 70 years ago that such a fixed
goal (axiom) sort of mind is not possible because there is no way to
stop it from getting stuck in infinite loops.

------------------

Turing proved nothing of the kind; if John has a proof derived from
Turing's that shows this, I've yet to see it.
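For anyone following along, what Turing actually proved in 1936 is that no total procedure can correctly decide, for every program, whether that program halts. Here's a minimal sketch of the diagonal argument (Python, all names hypothetical): given any claimed halting decider, you can mechanically build a program it misjudges. Note what the theorem does and does not say: it rules out a universal halting *decider*; it says nothing about whether a mind with a fixed top-level goal is possible.

```python
# Sketch of Turing's diagonalization: any claimed total halting
# decider `halts(f)` can be defeated by a purpose-built program.

def make_diagonal(halts):
    """Given a claimed decider halts(f) -> bool, build a program
    that does the opposite of whatever the decider predicts."""
    def diagonal():
        if halts(diagonal):   # decider predicts "halts"...
            while True:       # ...so loop forever: prediction wrong
                pass
        # decider predicts "loops forever", so halt at once: wrong again
    return diagonal

# Two (hypothetical, obviously broken) attempts at a decider:
def always_halts(f):
    return True               # claims every program halts

def never_halts(f):
    return False              # claims every program loops

d_true = make_diagonal(always_halts)
# always_halts says d_true halts, but by construction d_true would
# then loop forever -- so always_halts is wrong about d_true.
# (We don't call d_true() here, since it really would never return.)

d_false = make_diagonal(never_halts)
d_false()
# never_halts says d_false loops, but it returns immediately --
# wrong again. The same trap defeats *every* candidate decider.
```

The point of the sketch: the contradiction lives entirely in the decider, not in goal-directed programs as such, which is why the theorem doesn't bear on whether a fixed-goal mind can exist.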

Are we done now?

> You're making an in-principle testable accusation here with no
> citations whatsoever. This is intellectually dishonest.

I didn't see the point in regurgitating a less-than-a-week-old
message.

> > I can't speak for others, but I had to hand-derive both of them
> > from scratch as part of my Bachelor's.
>
> ORLY? I'm very impressed at your "mathetical" prowess. Perhaps it
> would help if you would post this supposed hand-derivation
> somewhere:

0.o

Dude, it was 15 years ago; part of my basic undergrad CS work.

> > When I say that something obviously derives from an important
> > mathematical proof, I will *show you the math*.
>
> But obviously you didn't. I wonder why.

0.o

Because I'm not claiming any novel applications of any mathematical
proofs!

> > People have tried to explain to you, several times now, that
> > neither Turing's work nor Goedel's work implies anything of the
> > kind. I don't see why I should spend my time trying again when
> > it's obvious you aren't prepared to listen.
>
> Did it ever occur to you that maybe the reason he's not convinced
> by your arguments is not that he's unwilling to listen, but
> because the arguments are wrong? Are you prepared to listen to
> him?

Tried it. Repeatedly. Over the past, what, 2 years? Given up.
Bored now.

> > The point is to make AIs that want to be nice to humans in
> > exactly the same way that humans tend to want to be nice to
> > babies.
>
> Again, you completely fail to live up to your own standards:

I have not: I have, at no point, claimed to have a mathematical
proof that such a mind is possible. I would say that humans provide
an excellent proof by existence, but that's not a math proof; I
don't have one, and I never said I did. John did, and I'm asking
him for it.

> > Separately, until and unless you've actually formalized what you're
> > saying ... *as math* ..., it's all just talk anyways.
>
> Ignoring anyone who posted in English rather than Math would
> result in ... let's see ... _no messages_ to the list. Ever.

I only require it when someone claims to have a mathematical proof
of something important.

> I am 97% confident that I will not get kicked off the list for
> that.

So am I, nor would I want you to. Who said anything about kicking
anybody off anything for anything?

(at this point, it's very unlikely I'll reply to anything more on
this thread; it's getting my blood pressure up, and it's a total
waste of everyone's time)

-Robin

-- 
They say:  "The first AIs will be built by the military as weapons."
And I'm  thinking:  "Does it even occur to you to try for something
other  than  the default  outcome?"  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT