Re: From chaos to minds

From: Gordon Worley
Date: Wed Feb 28 2001 - 13:27:47 MST

At 5:33 AM -0500 2/28/01, Mark Walker wrote:
>One way to think of the problem is whether minds are all of a species or a
>genus. If minds are all of a species then your elaboration of the celestial
>analogy holds. If minds form a genus then from your celestial analogy we
>might have to say that what develops are different types of minds,
>supernovaminds, neutronminds, blackholeminds, darkminds, etc. (Or to scale
>it down, gravityminds, electromagnetic minds, strong and weak minds). Which
>it is I don't know. But the conservative hypothesis when coding an AI would
>be to assume the genus hypothesis so as to be vigilant that it is friendly.

What I am proposing is that the end result of minds will always be
the same thing. I agree that there may be gravityminds and the like
along the way to the paragon mind, but there will be only one kind of
paragon mind. You must disregard the internal workings that get to
that point. No matter whether one views art on a computer screen or
on a canvas, the art looks the same. A mind, regardless of its
underlying workings, will be the same. After a few minutes of moving into
the SI realm, the mind should develop to the same end point that it
would in any other civilization (of course, there could always be
mistakes and one particular SI might misdevelop, but, on the whole,
SIs will reach the same end point in mind development).

Now, if we find that there are many mindish things out in the
universe, the story will be different (not all in the same genus, but
each one a genus of its own). We will have to worry about dealing
with something other than a mind, yet very much like one. It may
even turn out that we don't have minds at all, but something close to
them. I know I'm being vague, but I'm just starting to formulate
these ideas, not to mention that I lack experience with anything
other than the minds found on Earth. Give me a few decades and I'll
have the answers. ;-)

When coding an AI, why should we have to look out for other mindish
things? If we (I've noticed that I've been using 'we' a bit lately;
by 'we' I mean humans or human-derived AIs, SIs, etc.) are doing the
coding, all that we have experience with is our own minds. Now,
there might be an accident, but when it happens I doubt that we'd be
able to tell that it's a different mindish thing that we need to deal
with differently. I still haven't bought into the whole Friendliness
deal yet (though that may change as I read more of that paper ... ),
so, for my part, I don't really care whether a mindish thing is
Friendly or not.

Gordon Worley
PGP Fingerprint:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT