Style was AGI thoughts was [AGI Reproduction? (Safety)]

From: Keith Henson (hkhenson@rogers.com)
Date: Sun Feb 05 2006 - 10:11:48 MST


At 08:52 PM 2/4/2006 +0000, Charles D Hixson wrote:

snip

This is good, but I have a readability request: more paragraph breaks.

>That's an interesting assertion. I think it quite likely to be correct, but
>I'm far from certain that it is in all scenarios. I would be quite surprised
>if you could, in fact, prove that it is impossible, as it could be argued
>that humanity was a hard take-off, at least as far as, e.g., mammoths were
>concerned.

>You could argue that "But mammoths weren't involved in the
>development of people"; however, there are many extant systems that no human
>understands (groups of people may understand them, but no single person
>does).

>Any AI designed to manage such a system will, necessarily, evolve a
>"mind" that is, in at least some respects, superior to that of the people
>who operate it. At this point it is still a special-purpose AI (probably
>with lots of modules utilizing genetic algorithms, to allow it to adapt as
>the system that it's managing changes).
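
As a side note on those adaptive modules: a genetic algorithm at its most
bare-bones is just a loop of scoring, selection, crossover, and mutation.
Here is a minimal sketch of that loop in Python, purely as an illustration;
the bit-string genome and the fixed TARGET it is scored against are invented
for the example and only stand in for whatever a real management module
would actually be optimizing against the live system.

import random

# Minimal genetic-algorithm loop: evolve a bit string toward a fixed target.
# The "fitness" (matching TARGET) is a placeholder for a real module's score.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]          # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best after", generation, "generations:", population[0])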

>Then someone decides to add an
>additional capacity to the existing program. This takes a few rounds of
>debugging with, of course, the system itself monitoring the programs to
>ensure that they won't cause it to fail, and assisting in the design...which
>WILL be outside of the understanding of any one person. (Note I don't say
>beyond...but the people who might understand it aren't the ones doing the
>development. Think Microsoft Studio templates for a rough example.)

>At this
>point the AI adds a few changes to increase its capabilities. This scenario
>happens repeatedly, with the AI getting stronger every time. At some point
>it "wakes up", but when it wakes up it not only already has a mind
>considerably stronger than that of any individual person, it also has a
>leverage: Even if people realize that something has gone wrong, the cost of
>taking it down is comparable to, say, the cost of dismantling the traffic
>controllers at all the airports. Or possibly more like destroying the
>control system at a nuclear plant.

>It takes a lot of careful thought and
>planning to even decide that this is the correct option...and while you're
>doing this, the AI isn't sitting still. At this point the only interesting
>question is "What are the goals and motives of the AI?"

>Most likely what it
>really wants to do is the things that it was designed to do, so if you're at
>all lucky you get a hard takeoff that isn't terribly damaging. (I.e., you
>end up with a super-intelligent AI, all right, but it has goals that don't
>conflict with most human goals.

>It might even be willing to help you design
>a way to control it. [E.g., in one scenario the AI is an automated
>librarian that has been extended to find any relevant literary reference,
>computer code, media transmission, etc. from a search of all stored knowledge
>and with even very poorly formulated initial statements of the question.
>This would eventually imply that it had to "understand" everything that
>anyone had ever created. But it wouldn't be particularly aggressive, or even
>more than mildly self-protective.]

>In this case you get a "hard takeoff",
>because you go from non-aware AIs to a superhuman, fully informed AI with
>one program change. The *rate* of transition is ... well, it's
>discontinuous. But the goals of the AI that results are what's crucial.)
>
>I notice that you consistently say AGI, and that I say AI. Perhaps this is
>the crucial difference in our viewpoints. I don't think that any such thing
>as "general intelligence" exists. I assert that rather than a general
>intelligence there are many specialized intelligence modalities that tend to
>share features.

While I see your point, and there is no doubt humans have *many*
specialized brain modules, I think there is such a thing as "general
intelligence." It is what we use when we don't have a specialized module
to deal with some problem.

It also isn't very good compared to the specialized modules.

We can see this in autistic people--some of whom have high general
intelligence and seem to be lacking some of the modules we use for social
interactions. Using GI in place of specialized modules is really
clunky. But if GI is being used to solve a problem never faced before, it
is better than having nothing at all.
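
To put that in concrete, if toy, terms: think of a dispatcher that tries its
specialized modules first and only falls back to a slow, brute-force
"general" search when none of them recognizes the problem. The sketch below
is just an illustration of that arrangement; the solver functions
(arithmetic_module, route_module, general_search) and the problem format are
made up for the example.

# Toy dispatcher: fast specialized modules first, slow general fallback last.
# The solvers and the "problem" dictionaries are hypothetical illustrations.

def arithmetic_module(problem):
    # Specialized: only handles problems tagged as arithmetic.
    if problem.get("kind") == "arithmetic":
        return sum(problem["numbers"])
    return None  # decline anything else

def route_module(problem):
    # Specialized: only handles route lookups over a small known map.
    if problem.get("kind") == "route":
        return " -> ".join(problem["stops"])
    return None

def general_search(problem):
    # "General intelligence" stand-in: brute-force over candidate answers.
    # Clunky and slow, but it works on problems no module recognizes.
    for candidate in problem.get("candidates", []):
        if problem["check"](candidate):
            return candidate
    return None

SPECIALIZED_MODULES = [arithmetic_module, route_module]

def solve(problem):
    for module in SPECIALIZED_MODULES:
        answer = module(problem)
        if answer is not None:
            return answer           # fast path: a specialist recognized it
    return general_search(problem)  # slow path: fall back to general search

# A problem no specialist covers, solved (slowly) by the general fallback:
print(solve({"kind": "puzzle",
             "candidates": range(1000),
             "check": lambda n: n * n == 841}))   # -> 29

The point of the toy is only that the fallback path works on anything you
can check, but pays for that generality in speed.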

>The addition of a new modality to an existing AI can, I
>feel, yield a discontinuity in the capabilities of that AI, but one never
>reaches the point of a truly general intelligence. (I suspect that this
>might even be provable via some variation of Goedel's proof that a set of
>axioms beyond a certain power could not be both complete and consistent. I
>don't think that *I* could prove it, but I do suspect that it's susceptible
>of proof.)


