From: Ben Goertzel (firstname.lastname@example.org)
Date: Thu Jan 27 2005 - 21:20:13 MST
> MARC'S "GUESSES" ABOUT FAI AS AT JAN, 2005
> (1) What the Friendliness function actually does will
> be shown to be equivalent, in terms of physics, to
> moving the physical state of the universe closer to
> the Omega point with optimum efficiency.
On the face of it, creating an AI that doesn't destroy humanity is a very
different problem from moving the universe toward the Omega point.
But what you seem to be saying is: The best approach to serving humanity is
NOT to focus on preserving humanity in the immediate future, but rather to
focus on bringing about the Omega point, at which point humans will live
happily in Tipler-oid relativistic-surrealistic-physics-heaven along with
all other beings...
According to your approach, a (so-called) "Friendly AI" might well wipe out
humanity if it figured this was the best route to ensuring that humans come
back to frolic at the Omega point...
Well, sure, I guess so. But I'm tempted to put this in the category of:
"After the Singularity, who the f**k knows what will happen? Not us humans
with our sorely limited brains!"
I don't place that much faith in contemporary physics -- it would be mighty
hard to get me to agree to annihilate the human race in order to help
manifest the Omega point, which to me is a fairly speculative extrapolation
of our current physics theories (which, I stress, are *theories* and have
never actually been tested in a Tipler-oid Big Crunch scenario -- maybe when
that scenario actually happens we'll be in for some big surprises...)!
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT