From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Sun May 19 2002 - 13:40:30 MDT
Justin Corwin wrote:
>
> As I see it, there are four reasons an AI needs an environment:
>
Your note (admittedly hastily typed) seems to meld or mix up two concepts:
an Artificial Environment, and a Sensory Modality. I have found it very
helpful to keep these two concepts quite separate in my mind when thinking
about AI Learning. What do you mean by environment in this case? The
Artificial Environment that we will present to the AI for early learning,
or the Sensory Modalities we will implement? Or are you talking about both
at once?
>
> 1. For training the AImind to accept input.
> 2. For allowing the AImind to develop mental skills in an interactive
> setting. (action/response kind of stuff)
> 3. Possibly for keeping the AImind close to us, in its mental landscape.
> While it may be possible to make a mind with an entirely disembodied
> intelligence, just I/O ports and internet access, such a mind may have
> problems relating to us, given how physically oriented many of our
> language-objects are.
> 4. To allow the AImind to be more effective when it begins acting in the
> real world. If it has to extrapolate 'everything' it'll take longer and
> be more error-prone.
>
I think you missed this important reason:
5. To ground concepts in physical reality (or a physical-reality
simulation).
Although your item 4 seems to imply this, I suspect that grounding is an
essential requirement for stable mental development, almost from the start
of a Baby AI's existence. Here are two arguments to support that view:
i. Human children learn many, many things by trial-and-error. Developing
humans have reality thrust at them every waking moment... those that ignore
reality are classified as Mentally Ill, and do not survive without help. I
think an AI would have similar 'survival' difficulties if ve didn't have
real-world input. (This is a brief argument, but there is a lot more that
could be said about it.)
ii. A being whose perceptions are cut off from reality may develop quite
well for a while, but vis mind's development would be based on whatever
simulation was being run, a simulation that would inevitably be less
complex than reality. Therefore, there is a large probability that lessons
learned in the simulation would be invalid in reality. How much of a
problem it would be for the AI to adapt to reality later is a crucial issue
when trying to design a "Womb Environment".
>
> There are, of course, downsides. Providing an environment for an AImind
> ups complexity.
>
This downside is really non-existent IMO, if the AI is going to have *any*
ability to interact. Either you have to have real-world sensory modalities
(SMs), or SMs that perceive an Artificial Environment. It would seem to be
about the same implementational complexity either way... why do I intuit
this? With an Artificial Environment you have to build the environment (big
complexity hit) and the SM to perceive it (moderate complexity hit). With a
real-world environment you get the environment for free (zero work), but
the SM to perceive that environment will require more complexity (big
complexity hit).
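To make that trade-off concrete, here's a quick Python sketch of the two
options. All the names (SensoryModality, ArtificialEnvironment, GridVision,
CameraVision, and the camera's read() method) are my own illustrative
assumptions, not anything from an actual AI design:

    from abc import ABC, abstractmethod

    class SensoryModality(ABC):
        """Anything the AI can perceive through -- artificial or real-world."""
        @abstractmethod
        def perceive(self):
            """Return the current percept as a plain data structure."""

    class ArtificialEnvironment:
        """Big complexity hit: we must simulate the world ourselves."""
        def __init__(self, width, height):
            self.grid = [[0] * width for _ in range(height)]  # toy 2D world

        def step(self):
            pass  # world-update rules live here (the expensive part to write)

    class GridVision(SensoryModality):
        """Moderate complexity hit: perceiving a simulation we built is easy."""
        def __init__(self, env: ArtificialEnvironment):
            self.env = env

        def perceive(self):
            return self.env.grid  # direct, lossless access to simulated state

    class CameraVision(SensoryModality):
        """Big complexity hit: the real world is free, but parsing it is not."""
        def __init__(self, camera):
            self.camera = camera  # assumed: some frame source with a read() method

        def perceive(self):
            frame = self.camera.read()
            # ...edge detection, segmentation, object binding, etc. go here...
            return frame

Either way the mind sees one SensoryModality interface; the work just gets
paid for in different places.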
>
> But do richer environments really bring a quantifiable advantage?
>
I would say: sometimes. I strongly doubt that your 'twice as acute' vision
example would give much of a quantifiable advantage to _thought_ processes,
although it would no doubt be very useful in a hunter-gatherer setting.
However, other vision modification options *may* give a mental advantage.
How about these?
a) telescoping vision - allowing selectable zoom on a narrow field of
vision.
b) 360 degree spherical vision - allowing simultaneous observation in all
directions.
c) wide spectrum vision - allowing perception of selectable EM frequencies.
d) ... fill in your own favourite ...
These types of vision enhancements, and their accompanying SMs, would seem
to provide new ways to 'see', or 'imagine', or 'translate' concepts in a
mind... and that's just vision!
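For what it's worth, here is a rough Python sketch of how a) through c)
might be exposed to the mind as tunable parameters on a single vision SM.
The names (VisionConfig, ConfigurableVision, the sensor's sample() call)
are hypothetical placeholders for illustration only:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VisionConfig:
        zoom: float = 1.0                 # a) telescoping: >1.0 magnifies a narrower field
        field_of_view_deg: float = 90.0   # b) set to 360 for spherical coverage
        min_wavelength_nm: float = 400.0  # c) wide-spectrum: selectable EM band, lower bound
        max_wavelength_nm: float = 700.0  #    ...and upper bound

    class ConfigurableVision:
        """A vision SM whose 'shape' the mind itself can retune between glances."""

        def __init__(self, sensor, config: Optional[VisionConfig] = None):
            self.sensor = sensor              # assumed: exposes a sample() method
            self.config = config or VisionConfig()

        def glance(self, direction_deg: float):
            """Sample the scene in one direction under the current configuration."""
            cfg = self.config
            return self.sensor.sample(
                direction_deg,
                fov_deg=cfg.field_of_view_deg / cfg.zoom,
                band_nm=(cfg.min_wavelength_nm, cfg.max_wavelength_nm),
            )

The interesting part is that the AI could learn to adjust VisionConfig
itself, treating the SM's parameters as just another thing to think with.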
>
> So richer environments may in fact lead to richer mental structure.
>
Agreement!
>
> I believe that in this case, complexity is too important to let go, and
> the design hit should be taken.
>
Agreement!
>
> We don't want an AImind we have to relate to
> using 786432 pixel 2D metaphors. That would be annoying, and may represent
> a difficulty the AI may have trouble fixing when in the Self-Modification
> stage.
>
I don't understand your point here. What is wrong with 2D metaphors? If
you can get an AI that far, then that's freakin' great! Tackle 1D first,
then 2D, then 3 and 4...
>
> Thus, environmental richness may play a crucial factor in
> allowing emergent mindstructures to emerge at all.
>
Indeed. This was one of the major points of GISAI. One needs to have
several ways of looking at the world (several SMs) in order to have a
chance of solving non-obvious problems. The ability to map one SM onto
another, and glean meaningful inferences from the new mapping, seems to be
one of the key ways that humans think. Since humans are our only model of
intelligence, we would do well to try and reproduce that mental effect in
code.
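As a toy illustration of that SM-to-SM mapping (my own made-up example, not
something from GISAI): re-express an 'auditory' percept in 'visual' terms,
so that pattern-finders built for vision can be reused on sound:

    def map_audio_to_visual(amplitudes, width=64):
        """Render a sequence of audio amplitudes as a 1D brightness strip (0-255)."""
        if not amplitudes:
            return []
        peak = max(amplitudes) or 1.0  # avoid division by zero on silence
        strip = [int(255 * a / peak) for a in amplitudes[:width]]
        # A visual pattern-detector (an edge finder, say) can now run over
        # 'strip', letting the mind "see" the rhythm it just "heard".
        return strip

    # Example: a regular beat becomes a regular stripe pattern.
    print(map_audio_to_visual([0.0, 1.0, 0.0, 1.0, 0.0, 1.0]))
    # -> [0, 255, 0, 255, 0, 255]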
>
> Upside, really complexish environments are probably beyond
> us anyway.
>
Not for long, I hope.
Michael Roy Ames.