Re: singularitarian premises

From: Daniel Radetsky (daniel@radray.us)
Date: Wed Jan 26 2005 - 23:09:55 MST


On Wed, 26 Jan 2005 17:14:52 -0800
Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
 
> > Another thing: maybe I have the wrong picture of the Last Judge
> > scenario. As far as I understood it, Last Judge necessitates an AI
> > Jail of some type. That is, the AI is operating under some type of
> > restriction (e.g., confinement within a single mainframe, or maybe
> > some sort of software directive "You may not do X"; the specific
> > type of restriction is unimportant to the argument), which the Last
> > Judge can remove.
>
> Err, no. Not even close. The RPOP determines the CV of humanity...

Good to know. Thanks.

> > Therefore, it seems to me to be disingenuous to call your only
> > weapon a weapon of last resort. Correct me, though; I'm not very
> > well read in these areas.
>
> Well, sure, if nothing interesting happens to FAI theory between now
> and the CV making its decision.
>
> I consider this to have a probability of zero.

Once again, I'm having one of those feelings where either something is
seriously wrong with me or you're being ridiculous. Consider: people run up
against problems in every field of study. Some of those problems turn out
to be solvable. Others turn out to be fundamental: unsolvable or
intractable. I won't give you a list, because I hope to God that people
interested in a computer-related field already know a few. Now, given that
intractable problems exist, the hole that the Last Judge kludges closed
could be either tractable or intractable. You claim that there is zero
probability that it is intractable. Bullshit. I don't see what reason you
have to assign it any particular probability, unless you have some theory
for a replacement that you're already working on.

But even if we can agree on a nonzero probability that a better solution
will be developed, it still doesn't make sense to me to defend the design
by appealing to a solution you merely expect to develop in X amount of
time. We criticise the current solution because it's the one that's
actually on the table.

I'm thinking of something Eliezer said about being a professional paranoid:
worrying that everything will go wrong at once is the best way to ensure
that, if things can go right, they will. I think that's a good policy, and
I encourage anyone who agrees to stop assuming that a better solution to
the problem will be found. I don't think an SI will be able to find a
non-quantum polynomial-time factoring algorithm, not because it isn't smart
enough, but because I have no good reason to believe such an algorithm
exists. Likewise, what reason do you have to assume a solution to this
problem exists?
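
(To put the factoring point in perspective, and this is just the standard
complexity-theory picture as I understand it, not anything established on
this list: the best classical algorithm known, the general number field
sieve, factors an integer n in heuristic time

    L(n) = \exp\left( \left( (64/9)^{1/3} + o(1) \right)
                      (\ln n)^{1/3} (\ln\ln n)^{2/3} \right)

which is sub-exponential but nowhere near polynomial, and nobody has proved
that a polynomial-time algorithm is impossible. The Last Judge hole may be
in the same position: no known solution, and no proof that one exists.)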

Yours,
Daniel


