Re: More silly but friendly ideas

From: Mikko Rauhala
Date: Mon Jun 09 2008 - 22:46:32 MDT

Mon, 2008-06-09 at 08:32 -0700, John K Clark wrote:
> Exactly, so how can “obey every dim-witted order the humans give you
> even if they are contradictory, and they will be” remain the top goal

AFAIK nobody (well, nobody sane anyway) is advocating this as a top
goal, so this is a strawman.

> when in light of new information doing so turns out to be much more
> unpleasant than the AI expected,

This goes, I believe, near the core of your folly, which is pretty much
why I'm bothering to take one last stab at it.

This presupposes that the AI has a goal of "avoid unpleasantness", or
even the concept of "unpleasantness". Please stop treating your
personal goals and feelings as universal constants.

> and in light of still more information
> the AI's contempt for humans grows continually?

This says plenty about you (and, by happenstance, me), not about AIs.
"Contempt" is not a universal property of minds, and even if it were,
there would be no objective reason for one's contempt to change one's
behavior towards its object.

> No that is not a separate matter. A mind that cannot adapt, radically if
> necessary, is not a mind.

Adapting can be done on oh so many levels that do not contradict one's
core values. You are not requiring that a mind adapt in radical ways.
You are requiring that a mind adapt in irrational (read: insane) ways.

Anyway, this pet assertion is merely an a priori prejudice of yours.
Repeating it continually doesn't make it true (in any but a purely
syntactic way, as a partial entry for "mind" in the Clarkspeak
dictionary).

> A super intelligence will head in directions
> we cannot imagine or calculate.

Yet you are assuming that such a beast will carry all of your own
evolutionary baggage.

> > The fact is, an intelligent being which has
> > a particular static goal will do
> > everything it can to fulfil that goal.
> No, you just stating it does not make it a fact. I’ll tell you what the
> fact is, what you say above has never been observed to happen in the
> real world. NOT ONCE!

Uhm, Johnny-boy, it pretty much follows from the definition of an
intelligent being with a particular static goal. So no, the stating does
not make it a fact, it is a fact a priori.

The other fact is merely that so far (for reasonably well-understood
evolutionary reasons) there are no such beings around; only us, with our
competing and inconsistent goals, quite a few of them of supergoal
nature. Oh yeah, let's not forget "opaque"; this allows you not to
notice that several of those goals (even some of the widely recognized
ones of supergoal nature) exist at all before they activate due to
circumstance, thus making you think they've somehow appeared from thin
air. This false observation apparently leads you to conclude that minds
must change goals according to circumstance (which would incidentally be
an unfounded conclusion even if the observation were true...).

Finally, and pay attention here as this again gets to the core of your
folly, as far as I can tell you're claiming that the intelligent (for it
is superintelligences we're talking about, yes?) thing to do is
sometimes to somewhat or even completely abandon one's goals and adopt
new ones. You completely miss the fact that actions (such as altering
one's goals) are _not_ intelligent in any objective way, but only
relative to values/goals. Again, what counts as the intelligent thing
to do is defined by the goals one already has.

And meta-finally: Yo, any list snipers out there? The Clark record is
stuck; this discussion fails to provide any new insight into how the
world works, or even into his psyche at this point. (Yes, I concede that
there's no new insight in my post either. It's all pretty cut-and-dried
sanity.)

Mikko Rauhala   -     - <URL:>
Transhumanist   - WTA member     - <URL:>
Singularitarian - SIAI supporter - <URL:>

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT