RE: of possible interest: interaction with autonomous agents

From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue Jun 28 2005 - 00:33:29 MDT


H C writes

> Control isn't a valid measure of intelligence.

That's right; earlier I called it a signal characteristic, by which
I meant that it's highly correlated with intelligence.

> What is a valid measure of intelligence is optimization. Those
> that best optimize their desires are truly the most intelligent.

Well, be wary of definitions yourself. That one runs into trouble
with what you say below.

> For example... Those that make significant progress towards a safe
> Singularity or towards molecular nanotechnology are the most intelligent,
> because they are actually working to change their environment to optimally
> satisfy their desires.

I doubt that this is a good inference. Suppose that another person
judges that it's all hopeless and an evil AI will take over, or he
estimates that it won't happen until he's frozen, or (despite his
intelligence) figures that his own talents won't help all that much,
or figures that galactic contact is imminent, or... any number
of things. This person *could* turn out to be right even though the
majority here wouldn't bet on it. In any case, we really can't draw
valid conclusions about his intelligence from which goals he pursues.

Of course, one may rightly say that those working on these goals we
admire are, other things being equal, more intelligent than those
who are not. But it's just a statistical relationship.

> I'd speculate that lower organisms probably optimize their desires
> even better than we do, but have simpler, more transient desires.

Well, this is where your statement above runs into trouble. Few
will follow you in concluding that those lower organisms are more
intelligent than humans.

Lee



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:57 MST