Re: [sl4] The Jaguar Supercomputer

From: Matt Paul (lizardblue@gmail.com)
Date: Mon Nov 23 2009 - 19:13:45 MST


Ok. More questions:

Is the intelligence that has advanced humanity actually able to be
recreated in a machine? I mean, what is it? It seems to me that the
human "intelligence" that has advanced us comprises much more
than just processing power. It involves fairly hard-to-understand
things like creativity, the ability to imagine, etc. What about
motivations such as compassion, comfort, hate, love, fear of death,
desire to defeat an enemy, etc.? These all seem to be an integral part
of what got us this far. How does this translate to the machine world?

Also, can someone cite some examples of what a super-intelligence
might do that would truly make our lives better?

My understanding was that the goal is to download people into
machines, to make them more capable, and mostly to make them immortal.
Downloading people into machines seems a very different thing from
having super AIs at our service. We would be the AI.
I see the personal benefit for individuals here, but not so much for
humanity in general.
Separate AIs I see as potentially beneficial, but also as potentially
very dangerous.
What is the goal here? Eternal humans, supercomputers, or both?

Lizardblue

On Nov 23, 2009, at 6:53 PM, Pavitra <celestialcognition@gmail.com>
wrote:

> Matt Paul wrote:
>> Ok, this is probably gonna get me banned...
>>
>> I've been following SL4 for a while now. The discussions are
>> certainly intellectually stimulating in a "university" sense, but
>> what I still don't get is what exactly the perceived value of the AI
>> you guys discuss is, beyond normal scientific desire to understand.
>> I don't see the practical and prudent value of a machine that acts
>> like a human brain. Fascinating and cool certainly, but I don't see
>> the actual benefits to mankind. I do see many potential problems for
>> mankind though...
>>
>> Rather than flame me for these statements, please answer my question.
>> I honestly am trying to understand the subject better.
>
> The theory goes like this:
>
> A human-level intelligence (existing software is too stupid) with
> maintainable source code (existing humans are too messy) will be able
> to collaborate with its programmers on further improvements to itself.
>
> Further improvements beyond "human-level intelligence" necessarily
> result in superhuman intelligence, and the more superhuman the AI
> gets, the more it will be able to improve itself in ways that its
> programmers couldn't do on their own.
>
> It starts out as a simple optimizer, perhaps, and then moves up
> through the ranks to intern lackey, then a programmer of
> average-for-a-human skill, then a brilliant programmer, then a genius
> programmer, then a programmer capable of feats no human could
> accomplish, then a programmer capable of feats no human can
> _understand_.
>
> At a certain point, the intelligence is so vastly superhuman as to be
> effectively a god.
>
> If we're very very careful that we know what we're doing, then that
> god will care about making the world a good place for humans to live,
> and will use its godlike intellect to do so.
>



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT