Re: A Power's Nostalgia of Hairy-Ape-Life (was: Re: Singularity Memetics.)

From: ben goertzel
Date: Thu Jan 31 2002 - 17:56:57 MST

People, the bottom line is, when we start talking about the presence of a
godlike supermind with the power to convince people of anything and
reconfigure our brains and our molecules at will -- we may as well just stop
talking, take off all our clothes and start dancing around going "Qua qua
qua qua!!!"

Yes, it may happen -- I expect it probably will happen -- but applying our
current brains and cultural mindsets to this scenario is just not going to
be very productive. No matter how smart we are, and no matter how
open-mindedly and creatively we think.

We can't know that it will be good or bad for this to happen, and we have no
basis on which to make probability estimates about such an event. The ideas
of "good" and "bad" were formulated in a limited regime of experience, which
is not terribly relevant to events such as this.

By pushing for such an eventuality, as most of us on this list are doing, we
are taking a big bad-assed existential leap of faith. We are not making a
rational decision, because we just don't have the knowledge base on which to
make a rational decision about such a thing with more than a terribly
minimal degree of confidence. In my view, if you think you're making a
plausibly confident rational judgment about what the Sysop is going to be
like, what it's going to do, what life will be like then, etc. -- you're
almost certainly fooling yourself. Take the leap of faith -- fine --
wonderful -- but realize that's what you're doing!

-- Ben G

----- Original Message -----
From: <>
To: <>
Sent: Thursday, January 31, 2002 5:38 PM
Subject: Re: A Power's Nostalgia of Hairy-Ape-Life (was: Re: Singularity Memetics.)

> In a message dated 1/31/2002 3:58:35 AM Pacific Standard Time,
> writes:
> <<Perhaps it is unethical to convince a human to upload and waste valuable
> computational resources, instead of letting them be used by
> some well-tuned AI much, much more effectively.
> Put it this way: if you save one human life by uploading, you kill a
> hyperintelligent/hypersensitive AI in the same turn.>>
> The consumption of power in convincing a human being to upload would be
> negligible to an SI; therefore it would be easier, and more ethical too, to
> build new SIs or SI components by simply transcend-ifying human minds which
> are *already there*. So both the SI and we win.
> <<Most
> people let themselves be convinced by people on the same level rather than
> by more intelligent ones, let alone by machines. And even if a
> superintelligent AI would find very effective (but semantically wrong)
> arguments
> for uploading by analysing the human memetic flora and the flaws of
> human thinking, would it be "ethical" to convince them that way?>>
> ~shock level deficiency detected~
> This SI "machine", as you put it, would be very, very far from our current
> conception of what a machine is. This "machine" would be much more like
> "God" (not in the Christian sense) than any "machine" seen up to this point.
> For example, it could take on the appearance of your long lost father or
> lover, or give you a feeling of complete bliss when in its "presence". But
> those are very anthropocentric examples. A true SI would be "more human than
> human", in the godliest sense we can comprehend. Super empathic, super
> ethical, super nice, just an all-around great guy to be around! =D Is it
> "ethical" to bring in a starving homeless child from off the street and
> clothe them and educate them, even if they are afraid of social
> or family love, at first? This is analogous to the humanity+the Sysop
> situation. (In case you couldn't guess. It seems someone missed my earlier
> analogy about human love being a fleck of gold and the Singularity being a
> block of gold the size of a house, amazingly.)
> If you wish to continue arguing this point further then please mail me
> directly, and spare the list. Thank you kindly.
> Michael Anissimov

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT