From: Samantha Atkins (firstname.lastname@example.org)
Date: Wed May 26 2004 - 00:11:18 MDT
On May 24, 2004, at 1:25 PM, Ben Goertzel wrote:
> Put concisely, one of the main problems is: If you're modifying a human
> brain to be twice as smart, how can you be sure your modification won't
> have the side-effect of causing that human brain to feel like
> irresponsibly creating dangerous seed AI's or gray-goo-producing
Well, you can't. But do you expect humanity with its current level of
intelligence, morality and so on to be able to handle MNT? How about
raising the standard of living of more than half of humanity to a level
we would consider "decent"? Dealing with highly complex technological
and ethical questions? Even managing to keep the lights on here and turn
them on in the rest of the world? It isn't hard to find aspects
of our world that are apparently beyond our present understanding and
ability to control. It isn't difficult to see that the number
and challenges of such things are increasing - often exponentially.
> Human brain mods that don't increase intelligence dramatically are
> relatively safe in existential terms, but human brain mods that do
> increase intelligence dramatically are potentially dangerous by virtue
> of the dangerous tech that smart humans may play with.
It is the classic two-edged sword. Higher intelligence can always be
used for good or ill. If we were not already surrounded with problems
arguably beyond our abilities to resolve then it might make sense to
put off increasing human intelligence. But such is not the case.
> I'm not saying that smart humans will necessarily become evil or
> careless -- in fact I think the opposite is more closely true -- but
> it's clear that it will be hard to predict the ethical inclinations and
> quality-of-judgment of intelligence-enhanced humans.
Somehow I think this is far less of a risk, and far easier to predict,
than the ethical inclinations and quality-of-judgment of sentient or
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT