From: Cliff Stabbert (cps46@earthlink.net)
Date: Tue Aug 06 2002 - 21:41:48 MDT
Friday, August 2, 2002, 3:13:07 PM, Evan Reese wrote:
ER> What I don't understand is how the AI will be able to learn much of anything
ER> while being jailed. I know of no examples of intelligence emerging without
ER> interaction with its external environment; but for fear of what might
ER> result from an unfriendly AI, it seems the AI will be forced to begin
ER> its development in isolation.
ER> Some here have clearly given this question more thought than I have, so
ER> how will this education be carried out while keeping the AI in jail?
I'm not much of an "AI jail" advocate, but my thoughts have been along
the lines of simulated environments offering some sort of selective
pressure, in which an AI can learn and/or evolve. I agree that for us
to find an AI /useful/ it would need a lot of practical knowledge about
our environment/culture/language or what have you, in which case jail
isn't practical. But if intelligence can be abstracted from any
specific domain, the AI may well be able to develop, or be evolved,
in an artificial domain.
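To make that concrete, here is a minimal Python sketch of the kind of
sealed evolutionary loop I mean. The bit-string genomes, the hidden
target, and the mutation scheme are all toy assumptions of mine for
illustration, not a serious proposal; the point is only that every
pressure the population responds to is defined inside the simulation,
with no channel to the outside world.

import random

# Toy "artificial domain": agents are bit strings, and the only
# selection pressure is matching a hidden target pattern. Nothing
# in the loop touches the world outside the simulation.

GENOME_LEN = 32
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # The environment's pressure: similarity to the hidden target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            return gen, population[0]
        # The fitter half survives; the rest are mutated copies of
        # survivors.
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return generations, population[0]

if __name__ == "__main__":
    gens, best = evolve()
    print("best fitness:", fitness(best), "of", GENOME_LEN,
          "after", gens, "generations")

Obviously real intelligence would need a far richer environment than a
32-bit target, but the structure (sealed environment, internal
pressures, no channel out) is the same.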
As an aside, most of my thinking on AI and how to get there has been
heavily influenced by (in approximate order of significance) Douglas
Hofstadter, Gregory Bateson, chaos theory, and (more as background
'flavor') things like D'Arcy Thompson's _On Growth and Form_. I was
wondering if others have further sources in that vein, or know of
others working towards AI on such principles.
-- Cliff