Re: When does it pay to play (lottery)?

From: David Clark (clarkd@rccconsulting.com)
Date: Sun Jan 23 2005 - 22:41:53 MST


----- Original Message -----
From: "Brian Atkins" <brian@posthuman.com>
To: <sl4@sl4.org>
Sent: Sunday, January 23, 2005 11:34 AM
Subject: Re: When does it pay to play (lottery)?

> David Clark wrote:
> > The chance that *any* implementation of AI will
> > *take off* in the near future is absolutely zero. We haven't the
> > foggiest clue exactly what it will take to make a human level AI, let
> > alone an AI capable of doing serious harm to the human race.
>
> Your two statements of certainty are in direct conflict with each other,
> so I don't see how you can hold both at the same time.

I fail to see the conflict.

> If on one hand you claim there is an absolutely zero chance, then you
> must know in extremely amazing detail with 100% certainty what it takes
> for "take off". Or conversely if you don't know with perfect knowledge
> what it takes for "take off" then how can you claim a zero chance.

If a person *absolutely* doesn't know how to do something, why wouldn't that
give them zero chance of accomplishing the goal? Are you saying that I
might just luck out and stumble upon the answer and therefore can't be 100%
certain of failure? Your logic escapes me. Can you imagine accidentally
making a car? A car is far more likely to be created by accident than an AI
would be. I might agree on the possibility of an accidental take off if
*any* AI project were even close to human level, but sadly that is
definitely not the case.

> Of course the answer is that no one should be claiming a zero chance.
> And the other answer is that we need to continue gaining more knowledge.
> But because of the unknowns, gaining that knowledge should be done in as
> safe a manner as possible.

Safety is something you will *absolutely* get if you never write any code.
I couldn't agree more with the need for more knowledge. I am only
disagreeing with the method for getting it. The only way I have ever found
to really know a software algorithm is to implement it in a working program.
I have been surprised many times by thinking a technique would work or be
fast enough, only to find out just how wrong I was.

> For some reason this reminds me of the worries at the time of the a-bomb
> that many of the physicists had about whether it would accidentally
> ignite the entire atmosphere. They didn't know at first what would
> happen, and most of them would not have told you that the possibility
> was "absolutely zero". In fact even after they ran some numbers on
> paper, they still weren't absolutely certain. But they did by that point
> have enough odds in their favor to proceed. But even at the end when it
> was tested they were still heard betting with each other on the outcome.
> There was never absolute certainty.

How is that any comparison? The people working on the A-bomb didn't just send
the first bomb over Hiroshima without any successful tests. They had many
years of very expensive and extensive tests that led up to the detonation of
the first A-bomb. How does that compare with the effort at SIAI? I am not
putting down the effort made so far by the SIAI but please don't make
comparisons with the development of the A-bomb. Your group isn't close to
that development group by many orders of magnitude. Who knows what you
might achieve if the government of the USA spent the same amount of money
through the SIAI as was spent in developing the A-bomb?

> --
> Brian Atkins
> Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT