Re: Questions about any Would-Be AGI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue May 21 2002 - 09:43:55 MDT


I can't really contribute as many questions as I'd like, but here are a few
candidates:

What kind of things does your AI learn?
What kind of things are preprogrammed?
What kind of things are invented on the spur of the moment?
Do the answers to these questions reflect structural distinctions - i.e.
different kinds of cognitive content within the AI?
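
To make that concrete: one structural answer is to tag each piece of
cognitive content with its origin, so the learned/preprogrammed/improvised
distinction is explicit in the representation. A toy sketch in Python; all
names here (Origin, CognitiveContent) are hypothetical, not a claim about
anyone's actual architecture:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Origin(Enum):
        PREPROGRAMMED = auto()   # supplied by the programmers
        LEARNED = auto()         # acquired from experience
        IMPROVISED = auto()      # invented on the spur of the moment

    @dataclass
    class CognitiveContent:
        payload: object
        origin: Origin

    belief = CognitiveContent(payload=("fire", "hot"), origin=Origin.LEARNED)
    print(belief.origin.name)    # LEARNED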

Is there a systemic distinction between feature structure, category
structure, and event structure? (Of course this question requires some
theoretical background.)
Is there a representational distinction?
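
For instance, a representational distinction might mean literally
different datatypes for the three kinds of structure, rather than one
uniform encoding. A minimal sketch under that assumption, with
hypothetical names:

    from dataclasses import dataclass

    @dataclass
    class Feature:            # a measurable property of a stimulus
        name: str
        value: float

    @dataclass
    class Category:           # a cluster of feature-patterns
        name: str
        prototype: dict       # feature name -> typical value

    @dataclass
    class Event:              # a change of categorized state over time
        before: Category
        after: Category
        time: float

    edge = Feature("edge-density", 0.8)
    cup = Category("cup", {"edge-density": 0.8, "concavity": 1.0})
    spill = Event(before=cup, after=Category("puddle", {}), time=3.2)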

What kind of combinatorial patterns does your system contain?
Do they combine through blending, through structured composition, or both?
How large are the individual patterns?
How large are the combinations?
When the patterns combine, do they yield a new, larger representation of the
same type, or do they yield a different kind of representation? In the
latter case, can the combined representation be transformed back into a
bigger building block?
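
As a toy illustration of that last question: a Composite (a different
kind of representation than its parts) that can be chunked back into a
Pattern (the original building-block type). Names and structure are
hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Pattern:
        elements: tuple

    @dataclass
    class Composite:          # a different kind of representation
        parts: list = field(default_factory=list)

        def chunk(self) -> Pattern:
            # transform the combination back into a bigger building block
            flat = []
            for p in self.parts:
                flat.extend(p.elements)
            return Pattern(tuple(flat))

    a, b = Pattern(("red",)), Pattern(("circle",))
    c = Composite(parts=[a, b])
    print(c.chunk())          # Pattern(elements=('red', 'circle'))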

What needs to happen in order for your system to notice an implication
between two events?
What needs to happen in order for your system to notice an implication
between two perceptual features of an object?
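
One naive mechanism, just to fix ideas: conditional-frequency statistics
over co-occurring events. The 0.9 threshold and every name below are
assumptions of the sketch, not a claim about how any real system should
do it:

    from collections import Counter

    pair_counts, event_counts = Counter(), Counter()

    def observe(events: set):
        """Record one moment's co-occurring events."""
        for e in events:
            event_counts[e] += 1
        for a in events:
            for b in events:
                if a != b:
                    pair_counts[(a, b)] += 1

    def implies(a, b, threshold=0.9) -> bool:
        """Does seeing `a` predict seeing `b`?  Estimates P(b|a)."""
        if event_counts[a] == 0:
            return False
        return pair_counts[(a, b)] / event_counts[a] >= threshold

    for _ in range(10):
        observe({"lightning", "thunder"})
    print(implies("lightning", "thunder"))   # True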

How does your system track object-part hierarchies in perceptual data?
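
The obvious data structure here is a part-whole tree over perceptual
regions; a toy sketch, assuming regions are bounding boxes and with
hypothetical names throughout:

    from dataclasses import dataclass, field

    @dataclass
    class PerceivedObject:
        label: str
        region: tuple                       # (x, y, w, h) in the visual field
        parts: list = field(default_factory=list)

    face = PerceivedObject("face", (0, 0, 100, 120), parts=[
        PerceivedObject("eye", (20, 30, 15, 10)),
        PerceivedObject("eye", (65, 30, 15, 10)),
        PerceivedObject("mouth", (35, 90, 30, 12)),
    ])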

Can your system imagine perceptual data of the same kind as is produced by
its sensory capacities?
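
The operational test is whether imagined data and sensed data share a
format, so downstream perception can run on either. A stub illustration,
everything below being a stand-in:

    import random

    def sense() -> list:
        """Stub sensor: a row of pixel intensities in [0, 1]."""
        return [random.random() for _ in range(8)]

    def imagine() -> list:
        """Imagination yields the *same* data type the sensor yields,
        so the same perceptual machinery can process it."""
        return [0.5 for _ in range(8)]

    assert type(sense()) == type(imagine())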

Does your system originate actions in virtual or real environments?

Does your system originate internal actions?
Does your system have a reflective "sensorimotor" environment in which it
can learn skills and perceptual categories?
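
That is, internal actions would take the system's own cognitive state as
their "environment", through the same kind of interface as motor actions
take the world. A toy sketch, hypothetical throughout:

    class Mind:
        def __init__(self):
            self.workspace = []          # internal state the system can act on

        def external_act(self, world, command):
            world.append(command)        # ordinary motor action

        def internal_act(self, command):
            # an action whose "environment" is the system's own workspace,
            # so skills and categories can be learned over cognition itself
            self.workspace.append(command)

    m, world = Mind(), []
    m.external_act(world, "move-north")
    m.internal_act("recall-last-plan")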

How does your system choose between actions?
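
The simplest possible answer, for contrast: score each candidate action
by a desirability estimate and take the maximum. Any real architecture
will be messier; this is only a baseline sketch with made-up numbers:

    def choose(actions, evaluate):
        """Pick the action whose predicted outcome scores highest.
        `evaluate` maps an action to a desirability estimate."""
        return max(actions, key=evaluate)

    utilities = {"eat": 0.7, "sleep": 0.4, "explore": 0.9}
    print(choose(utilities, utilities.get))   # explore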

How does your system learn realtime skills?
How does your system learn realtime reflective skills?
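
One standard shape of answer: compile slow deliberation into a cheap
cached lookup, so the skill meets a realtime budget on reuse. A minimal
sketch, where the sleep() is a stand-in for real reasoning:

    import functools, time

    def deliberate(state):
        time.sleep(0.01)                 # stand-in for slow reasoning
        return min(state)                # some expensively computed choice

    @functools.lru_cache(maxsize=None)
    def skill(state):
        """A realtime skill: a deliberated answer compiled into a
        cheap lookup, fast enough to use on the fly thereafter."""
        return deliberate(state)

    skill((3, 1, 2))                     # slow the first time
    skill((3, 1, 2))                     # fast on every reuse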

When your system learns something, what are the specific ways in which the
learned content and the experience of learning contribute to solving similar
future problems?
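
For example, "contributing to similar future problems" might cash out as
indexing solutions by problem features and retrieving them instead of
re-deriving. A toy sketch with hypothetical names:

    memory = {}                          # problem signature -> solution

    def signature(problem: dict) -> frozenset:
        """Reduce a problem to retrievable features."""
        return frozenset(problem.items())

    def solve(problem: dict):
        key = signature(problem)
        if key in memory:                # learned content pays off here
            return memory[key]
        answer = sum(problem.values())   # stand-in for real problem solving
        memory[key] = answer
        return answer

    solve({"a": 1, "b": 2})
    print(solve({"a": 1, "b": 2}))       # retrieved, not re-derived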

...actually, I think I'd better stop now.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


