Re: AGI thoughts was [AGI Reproduction? (Safety)]

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Sat Feb 04 2006 - 13:52:04 MST


On Saturday 04 February 2006 01:43 pm, Rick Geniale wrote:
> P K wrote:
> >> From: "nuzz604" <nuzz604@gmail.com>
> >> Reply-To: sl4@sl4.org
> >> To: <sl4@sl4.org>
> >> Subject: Re: AGI Reproduction? (Safety)
> >> Date: Fri, 3 Feb 2006 20:15:20 -0800
> >>...
> > 1) AGI theory will give a clearer picture of how FAI can be
> > technically implemented.
> > 2) AGI work can have semi-intelligent tools as offshoots that, when
> > combined with human intelligence, enhance it (ex: human + computer +
> > Internet > human). We could then work on FAI theory more efficiently
> > (and AGI as well).
>
> Finally somebody is hitting the target.
> Also, the problem of the hard takeoff is fake. It has never existed. It
> pertains only to SF (I will explain this point better on our site).

That's an interesting assertion. I think it quite likely to be correct, but
I'm far from certain that it is in all scenarios. I would be quite surprised
if you could, in fact, prove that it is impossible, as it could be argued
that humanity was a hard take-off, at least as far as, e.g., mammoths were
concerned. You could argue, "But mammoths weren't involved in the
development of people"; however, there are many extant systems that no human
understands (groups of people may understand them, but no single person
does). Any AI designed to manage such a system will, necessarily, evolve a
"mind" that is, in at least some respects, superior to that of the people
who operate it. At this point it is still a special purpose AI (probably
with lots of modules utilizing genetic algorithms, to allow it to adapt as
the system that it's managing changes). Then someone decides to add an
additional capacity to the existing program. This takes a few rounds of
debugging, with the system itself, of course, monitoring the programs to
ensure that they won't cause it to fail, and assisting in the design...which
WILL be outside of the understanding of any one person. (Note I don't say
beyond...but the people who might understand it aren't the ones doing the
development. Think Microsoft Studio templates for a rough example.) At this
point the AI adds a few changes to increase its capabilities. This scenario
happens repeatedly, with the AI getting stronger every time. At some point
it "wakes up", but when it wakes up it not only already has a mind
considerably stronger than that of any individual person, it also has
leverage: Even if people realize that something has gone wrong, the cost of
taking it down is comparable to, say, the cost of dismantling the traffic
controllers at all the airports. Or possibly more like destroying the
control system at a nuclear plant. It takes a lot of careful thought and
planning to even decide that this is the correct option...and while you're
doing this, the AI isn't sitting still. At this point the only interesting
question is "What are the goals and motives of the AI?" Most likely what it
really wants to do is the things that it was designed to do, so if you're at
all lucky you get a hard takeoff that isn't terribly damaging. (I.e., you
end up with a super-intelligent AI, all right, but it has goals that don't
conflict with most human goals. It might even be willing to help you design
a way to control it. [E.g., in one scenario the AI is an automated
librarian that has been extended to find any relevant literary reference,
computer code, media transmission, etc. from a search of all stored knowledge
and with even very poorly formulated initial statements of the question.
This would eventually imply that it had to "understand" everything that
anyone had ever created. But it wouldn't be particularly aggressive, or even
more than mildly self-protective.] In this case you get a "hard takeoff",
because you go from non-aware AIs to a superhuman, fully informed AI with
one program change. The *rate* of transition is ... well, it's
discontinuous. But the goals of the AI that results are what's crucial.)
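
Just to make the "modules utilizing genetic algorithms" bit above a little
more concrete, here is a minimal sketch in Python. It is purely
illustrative; the fitness function, the parameter vector, and the drifting
"system state" are assumptions of mine, not a description of any real
control system.

    import random

    def fitness(params, system_state):
        # Hypothetical score: how well this parameter vector "manages" the
        # current system state (here just negative squared error against a
        # drifting target, a stand-in for real performance metrics).
        return -sum((p - s) ** 2 for p, s in zip(params, system_state))

    def mutate(params, rate=0.1):
        # Perturb each parameter with a little Gaussian noise.
        return [p + random.gauss(0, rate) for p in params]

    def adapt(population, system_state, keep=5, children=20):
        # One generation: score everyone, keep the best, breed mutated copies.
        scored = sorted(population, key=lambda p: fitness(p, system_state),
                        reverse=True)
        survivors = scored[:keep]
        offspring = [mutate(random.choice(survivors)) for _ in range(children)]
        return survivors + offspring

    # The managed system drifts over time; the module keeps adapting to it.
    population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(25)]
    for step in range(100):
        system_state = [0.5 * step / 100.0] * 3   # slow, made-up drift
        population = adapt(population, system_state)

    print("best parameters so far:",
          max(population, key=lambda p: fitness(p, system_state)))

Nothing fancy, but it shows the flavor: each such module keeps re-tuning
itself against whatever the managed system is currently doing, with no one
person needing to follow the details.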

I notice that you consistently say AGI, and that I say AI. Perhaps this is
the crucial difference in our viewpoints. I don't think that any such thing
as "general intelligence" exists. I assert that rather than a general
intelligence there are many specialized intelligence modalities that tend to
share features. The addition of a new modality to an existing AI can, I
feel, yield a discontinuity in the capabilities of that AI, but one never
reaches the point of a truly general intelligence. (I suspect that this
might even be provable via some variation of Goedel's proof that a set of
axioms beyond a certain power could not be both complete and consistent. I
don't think that *I* could prove it, but I do suspect that it's susceptible
of proof.)
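
For reference, the Goedel result I'm gesturing at is the first
incompleteness theorem. Stated loosely (the notation below is only my
shorthand for the theorem itself, not a sketch of how such a proof about
intelligence would actually go):

    % For any consistent, effectively axiomatized theory T strong enough to
    % express elementary arithmetic, there is a sentence G_T with
    \exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T

The hand-waving step would be to treat any fixed reasoning architecture as
such a formal system, so that there are always questions it can't settle;
whether that actually rules out a "truly general" intelligence is exactly
the part I said I couldn't prove.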

>
> >> ...


