From: Nick Tarleton (nickptar@gmail.com)
Date: Sun Nov 25 2007 - 12:08:28 MST
On Nov 25, 2007 1:35 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> "Harry Chesley" chesley@acm.org Wrote:
>
> > I believe I get it now: you mean that the AI
> > is unpredictable from our perspective.
>
> The AI is unpredictable even from its own perspective; just like us, it
> sometimes won't know what it's going to do next until it does it. And
> that is the only definition of the term "free will" that is not
> complete gibberish.
But it, and we, can know the ultimate goal. If you are working through a
complex, unfamiliar task, you may not know which specific step you will
take next, but you can still predict that the task will get done (or fail
cleanly), and so can anyone who knows what you're trying to do.
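To make that distinction concrete, here is a toy sketch in Python (my own
illustration, not anything from this thread): a randomized quicksort whose
individual pivot choices are unpredictable, even to the program itself, but
whose final outcome is guaranteed.

    import random

    def randomized_sort(items):
        # Which pivot gets picked at each step is unpredictable,
        # but the postcondition -- a sorted list -- always holds.
        if len(items) <= 1:
            return list(items)
        pivot = random.choice(items)  # the unpredictable "next step"
        return (randomized_sort([x for x in items if x < pivot])
                + [x for x in items if x == pivot]
                + randomized_sort([x for x in items if x > pivot]))

    data = [5, 3, 8, 1]
    assert randomized_sort(data) == sorted(data)  # outcome is predictable

An observer who knows the goal (sorting) can predict the result without
being able to predict any individual step.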
> "Nick Tarleton" <nickptar@gmail.com>
>
> > It is impossible to prove statements about the
> > behavior of programs in general, sure, but we
> > can still construct particular programs we
> > can prove things about.
>
> Big programs? Programs that do interesting things? Programs capable of
> creating a Singularity? Jupiter Brain category programs? Don't be
> ridiculous.
Unpredictability does not necessarily scale with complexity. My web browser
is vastly more complex than your program, but very predictable, because it
was designed to be so. Creating a predictable AI is harder, but there's no
reason to think it impossible.
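As a trivial sketch of what "proving things about particular programs"
means (again my own example): Rice's theorem rules out deciding nontrivial
properties of arbitrary programs, but it says nothing about a specific
program whose structure admits a direct proof.

    def count_down(n):
        # Termination proof: n is a non-negative integer that strictly
        # decreases on each iteration and is bounded below by 0, so the
        # loop runs exactly n times. The return value is provably 0.
        assert n >= 0
        while n > 0:
            n -= 1
        return n

    assert count_down(1000) == 0

Nothing about the halting problem prevents designing larger systems the
same way: by construction, with the properties we care about proved as we
go.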