From: Christian Szegedy (szegedy@or.uni-bonn.de)
Date: Thu Jan 31 2002 - 04:58:23 MST
DarkVegeta26@aol.com wrote:
> I think the more likely "universal concepts" to remain will be 'fun',
> in some sense, more 'understanding' in some sense, and more 'increased
> amount and speed of information processing/complexity' in *some*
> sense. I think it would be considered unethical for a Next-Level
> entity to *not* convince a human (which it could do quite easily) to
> accept the uploading/transcension process.
Perhaps it is unethical to convince a human to upload and waste valuable
computational resources, instead of letting them be used far more
effectively by some well-tuned AI.
Put it this way: if you save one human life by uploading, you kill a
hyperintelligent/hypersensitive AI at the same time.
> Only an unethical, subhuman AI would allow this to happen without a
> fight. And AI's always win in fights with humans, perfectly and
> physically harmlessly in cases of intellect/memetics. (The "uploading
> is the better way" meme, supermemetically engineered by a
> hyperconscious AI. It's like memes on crack.)
>
> Michael Anissimov
Please tell me, who is more effective at memetics: highly intelligent
scientists or dyslexic politicians? Which meme is more popular at the
beginning of the 21st century: rationalism or superstition? Most
people let themselves be convinced by people on their own level rather
than by more intelligent ones, let alone by machines. And even if a
superintelligent AI could find very effective (but semantically wrong)
arguments for uploading by analysing the human memetic flora and the
flaws of human thinking, would it be "ethical" to convince them that way?
Christian Szegedy