From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Mon May 24 2004 - 12:46:47 MDT
Marc Geddes wrote:
> More likely any general intelligence necessarily has
> to have: a 'self', consciousness, some degree of
> observer centeredness, some non-altruistic aspects to
> its morality, some input from the 'personal' level
> into its morality, and helping the world would only be
> a *secondary* consequence of its main goals.
I'm not really an SIAI fanboy, but it is apparent even from my own
theoretical perspective that your assertions are almost certainly
incorrect. These things aren't "necessary" in many reasonable
theoretical models. Some of the things you mention will be exhibited
with high probability in evolutionary systems like biology, but there is
nothing *requiring* their expression, and their expression can be suppressed
in the design if desired. An AGI will presumably have its characteristics
engineered rather than evolved.
On what basis are you asserting that suppressing the expression of these
characteristics in an AGI is "impossible"? I cannot find a good
theoretical basis to make such assertions.
j. andrew rogers