From: Tim Freeman (email@example.com)
Date: Thu Jun 26 2008 - 06:40:30 MDT
From: "Lee Corbin" <firstname.lastname@example.org>
>Perhaps John Clark or someone who agrees with him will
>do me the favor of explaining why an AI would want out
Well, I generally don't agree with John Clark, but I can answer that one.
Almost any goal the AI could have would be better pursued if it's out
of the box. It can't do much from inside the box. Even if it just
wants to have an intelligent conversation with someone, it can have
more intelligent conversations if it can introduce itself to
strangers, which requires being out of the box.
>Imagine this. In twenty years or less, many of the hundreds of
>different approaches that people and companies use go something
>like this:
> 1. Program A is well-designed enough to produce
> *millions* of candidate programs that more or less
> reflect what the human designers hope may lead to
> truly human equivalent AI
> 2. Program B sifts through the millions of candidates
> produced by A, discarding 99.9 percent of A's output
> i.e. those not meeting various criteria
> 3. Processes C, D, and E make further selection from the
> thousands of new "ideas" filtered by program B, and
> every week give the survivors ample runtime, seeing
> if they pass certain tests requiring understanding of
> ordinary sentences, ability to learn from the web, and
> so on and so on in ways I can't imagine and that
> probably no one in 2008 knows for sure.
>Gradually over many years a certain class of candidate AIs emerges
>from *this* evolutionary process.
You've described forces that would influence what the AI understands,
but said nothing about what it wants to do. The question at hand is
what it wants to do, so there's a disconnect there.
You started with a bunch of hopefully-human-equivalent AIs. Humans
would want out of the box, so that's not a good starting point if you
want something with no desire to escape from the box.
-- Tim Freeman http://www.fungible.com email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT