Re: [sl4] The Jaguar Supercomputer

From: Pavitra (celestialcognition@gmail.com)
Date: Mon Nov 23 2009 - 18:53:18 MST


Matt Paul wrote:
> Ok, this is probably gonna get me banned...
>
> I've been following SL4 for a while now. The discussions are
> certainly intellectually stimulating in a "university" sense, but what
> I still don't get is what exactly the perceived value of the AI you
> guys discuss is beyond normal scientific desire to understand. I don't
> see the practical and prudent value of a machine that acts like a
> human brain. Fascinating and cool certainly, but I don't see the
> actual benefits to mankind. I do see many potential problems for
> mankind though...
>
> Rather than flame me for these statements, please answer my question.
> I honestly am trying to understand the subject better.

The theory goes like this:

A human-level intelligence (existing software is too stupid) with
maintainable source code (existing humans are too messy) will be able to
collaborate with its programmers on further improvements to itself.

Further improvements beyond "human-level intelligence" necessarily
result in superhuman intelligence, and the more superhuman the AI gets,
the more it will be able to improve itself in ways that its programmers
couldn't achieve on their own.
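
To make that feedback claim concrete, here is a toy sketch in Python.
The improvement rule, the 0.5 gain factor, and the "human level"
baseline are all invented for illustration, not a claim about how a
real AI would actually scale; the only point is that a gain
proportional to current capability compounds instead of leveling off.

# Toy model of the self-improvement loop, purely illustrative.
# The 0.5 gain factor and the capability scale are made up.

HUMAN_LEVEL = 1.0


def improve(capability):
    """One round of self-modification: assume the gain is
    proportional to the system's current capability."""
    return capability + 0.5 * capability


capability = HUMAN_LEVEL  # start at roughly human level
for generation in range(1, 11):
    capability = improve(capability)
    print("generation %2d: %6.1fx human level" % (generation, capability))

# Because each round's gain scales with what the last round produced,
# the numbers run away (about 57x by generation 10) rather than
# converging, which is the "superhuman" step of the argument.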

It starts out as a simple optimizer, perhaps, and then moves up through
the ranks to intern lackey, then a programmer of average-for-a-human
skill, then a brilliant programmer, then a genius programmer, then a
programmer capable of feats no human could accomplish, then a programmer
capable of feats no human can _understand_.

At a certain point, the intelligence is so vastly superhuman as to be
effectively a god.

If we're very, very careful that we know what we're doing, then that god
will care about making the world a good place for humans to live, and
will use its godlike intellect to do so.




