Re: LOGI Question

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 23 2002 - 21:25:16 MDT


Brian Phillips wrote:
>
> Eliezer,
> Have been reading the Levels of Organization in
> General Intelligence and am wondering...

LOGI! Now why didn't I think of that instead of LoOiGI, which was so
dreadful that I had to replace it with DGI in the URL? Oh well...

> Are you envisioning a GAI that essentially has a
> code-deciphering/manipulation metafunction that is
> analogous in its "mind"/"brain" schema to the role our
> visual system plays in primates?

Roughly. I think there may be profound reasons why code is inherently less
amenable to "modalitization" than vision, but humans with *no* codic
modality at all probably aren't optimal.

> Am I understanding that right? If so, won't this thing be... well...
> profoundly Alien?

Yes.

> (Not that I'm implying malevolence... just... well,
> alien.) Granted, I'm talking from a clinical background here,
> but...
> Imagining something with the general intelligence of... oh, say,
> a chimp, with an architecture analogous to monkey visual cortex
> as its functions for working in a code environment...
> I can see such an "infrahuman" GAI as being insanely
> difficult to communicate with. Trying to communicate with
> evolutionarily-derived biological infrahumans is bad enough,
> and they "grok" the same environment we do.

Biological infrahumans (what a lovely term!) aren't adapted to linguistic
communication, period - see Terrence Deacon's "The Symbolic Species". The
difficulty with communicating with chimps is not (just) that they are
infrahuman in general intelligence, but that they lack the cognitive
architecture for linguistic communication and deliberation.

Communicating across a different modality set isn't where the real Alien
aspects come from, I think. A human might never be able to understand the
code the AI writes, but hopefully the AI will still be able to describe, in
abstract terms, the module's overall purpose. Programmers can communicate
to nonprogrammers what a module does, even if they can't communicate how
it works. If we're lucky, it will turn out that Real Programmers can
communicate with the human poseurs.

I think that different modalities will actually turn out to be relatively
small sources of Alienness compared to, for example, using much smaller
concepts in longer serial chains of deliberation. I think that
communication between humans is profoundly based on a shared understanding
of what is "obvious" in a deliberation sequence, and that it may turn out to
be extremely difficult for either humans or infrahuman AIs to figure out
what is "obvious" to the other, or at least figure it out fast enough for
realtime communication.

> Has it occurred to you that it might take a "transhuman" AI
> to carry on a "human-level" conversation with you?

I think a human-equivalent AI should be defined as one which, in carrying on
a conversation, shocks you with its brilliance at least as often as it
shocks you with its stupidity. This is because I don't think there's much
chance of a human balance of domain competencies. An AI that *never* sounds
stupid is probably going to be a transhuman.

> Assuming it could somehow be motivated to try?

Seems like a pretty straightforward subgoal of Friendliness to me.

> Would such an entity
> even be likely to have an ego or "sense of self" as we would
> understand it?

Ego, no; self, yes, but certainly not as we understand it. Introspection,
goals, and social modeling will all be profoundly different, not just from
the human design, but from any evolved design.

Hope you find this reassuring...

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


