Re: [sl4] The Jaguar Supercomputer

From: Pavitra
Date: Mon Nov 23 2009 - 21:29:15 MST

Lizardblue wrote:
> Ok. More questions:
> Is the intelligence that has advanced humanity actually able to be
> recreated in a machine? I mean what is it. It seems to me that the
> human "intelligence" that has advanced us is comprised of much more
> than just processing power. It involves fairly tough to understand
> things like creativity, the ability to imagine, etc. What about
> motivations such as compassion, comfort, hate, love, fear of death,
> desire to defeat an enemy, etc. These seem to all be an integral part
> of what got us this far. How does this translate to the machine world?

I think it translates as good software design, which is why the
intelligence of the machine accelerates when the intelligence of its
programmers increases. (Mere processing power would translate as good
hardware design, which is subject to Moore's Law.)

> Also, can someone cite some examples of what might a super-
> intelligence do that would truly make our lives better?

Develop a medicine or medical technique to cure cancer; invent new
architectural techniques and building materials that would make it
possible to cheaply house and clothe all of humanity; mediate political
conflicts, figuring out mutually beneficial solutions and persuading the
parties to adopt them.

And those are all things that humans can foresee wanting. Look at
existing species intelligence gaps if you really want to see what could
happen: what can humans do to make the lives of, say, a gerbil better?
We can ensure that it always has enough food, which it knows to want;
likewise, we can keep it safe from predators. But we can also design a
balanced diet for it to keep it free of disease, even inventing new
foods for that purpose. This is what a vast intelligence gap means.

> My understanding was that the goal is to download people into
> machines, to make them more capable, and mostly to make them immortal.
> Downloading people into machines seems a very different thing from
> having super AIs at our service. We would be the AI.

That sounds like a nice feature to implement, but if "make them more
capable" is finitely bounded at (say) merely a hundred thousand times
current human capacity, then it's just not going to be in the same
league as an intelligent being with maintainable source code.

Human uploads that are enhanced in part by reimplementing (aspects of?)
the brain in a more manageable form might possibly become the first
recursively self-improving artificial general intelligences. But I
suspect we're going to figure out how to replicate our black-box
behavior before we reverse-engineer our actual source.

> I see the personal benefit for individuals here, but not so much for
> humanity in general.
> Separate AIs I see as potentially beneficial, but also as potentially
> very dangerous.
> What is the goal here? Eternal humans, supercomputers, or both?

The goal is to attain that which we do not yet know to want.
Despaghettifying human uploads would probably be the safest way to get
there, if we could somehow ensure that somebody else doesn't do it the
stupid way first and kill us.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT