Re: AI debate at San Jose State U.

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Thu Oct 20 2005 - 15:34:05 MDT


Woody Long wrote:
> If this is true then "friendliness theory" is trying to eliminate
> the field of strong AI itself, and by extension the singularity.

This misunderstanding appears to arise from conflating 'human
equivalent' in the sense of 'can do anything a human can do' with
'humanoid intelligence' in the sense of 'has a cognitive
architecture, and in particular a motivational system, similar to
a human's'. The two are not the same thing, or even closely related.

> I don't think that is what the Singularity Institute is trying
> to do. In fact the website says you are trying to build a fully
> human mind.

The SIAI web site states an intention to build a strong AI. This
relies on the former definition of 'human equivalent', not the
latter one. This statement is semi-deprecated anyway given
Yudkowsky's apparent preference for 'really powerful optimisation
processes', but 'strong AI' does have the virtue of being an
existing well-known term.

> All SAI by definition will have an awareness of self and
> self interests.

Awareness of self, yes, simply because there are many things that
humans can do that require self-awareness (though not the specific
human kind of self-awareness; you wouldn't want to cripple an AGI
with human reflective limitations anyway).

Self-interest, no. I can't precisely refute this without a
specific description of the behaviour or mechanisms you think are
necessary, but basically there are no fundamental goals that all
intelligences have to have. It's true that evolution tends to
produce intelligences with particular kinds of goals, but evolution
is just one kind of intelligence-producing process among many.

> It is because of this self interest driven human intelligent
> behavior that it will be able to build cities on the moon without
> human intervention, etc.

'Build cities on the moon' doesn't sound much like self-interest
to me; why would an AI need 'cities' instead of just sensor
platforms and compute nodes? Regardless, if you want an AI to build
cities on the moon, simply put that goal into its goal system. An
attempt to instill 'self-interest' will just lead to goal drift and
arbitrary (probably bad) results.
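
To make the 'just put the goal into its goal system' point concrete,
here's a toy sketch in Python (my own illustration, nothing to do
with any actual SIAI design; the names plan_greedy and
moon_city_utility are invented for the example). The optimiser's
behaviour is fixed entirely by whatever utility function it is
handed; no notion of 'self-interest' appears anywhere unless someone
deliberately writes it into the goal content:

    # Toy illustration only: behaviour is determined by the supplied
    # utility function, not by anything intrinsic to the optimiser.
    from typing import Callable, Dict, List

    State = Dict[str, int]

    def plan_greedy(state: State,
                    actions: List[Callable[[State], State]],
                    utility: Callable[[State], float],
                    steps: int = 10) -> State:
        """Repeatedly take whichever action most increases the given utility."""
        for _ in range(steps):
            state = max((a(state) for a in actions), key=utility)
        return state

    def moon_city_utility(s: State) -> float:
        # 'Build cities on the moon' encoded as explicit goal content.
        return float(s.get("moon_cities", 0))

    actions = [
        lambda s: {**s, "moon_cities": s.get("moon_cities", 0) + 1},
        lambda s: {**s, "compute_nodes": s.get("compute_nodes", 0) + 1},
    ]

    print(plan_greedy({"moon_cities": 0, "compute_nodes": 0},
                      actions, moon_city_utility))
    # -> {'moon_cities': 10, 'compute_nodes': 0}

Swap moon_city_utility for any other function and the same machinery
pursues a completely different goal, which is the sense in which
there are no goals an optimiser 'has to have'.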

> Military SAI, no by definition. Corporate built, consumer SAI, yes,
> as self-destructing, non-harming (non-defective) consumer desired
> products, by way of the profit motive.

This sounds like a 'shock level 2' view of what AI can /do/.
'Consumer-desired products'? 'Profit motive'? These things may lead
to the first seed AI being built, deliberately or accidentally, but
once it exists the only real constraints on its actions will be the
laws of physics and the content of its goal system.

> I thought it was our job to educate the public on the coming
> friendly SAI / Singularity, how it will be safe-built, and the huge
> science and engineering benefits we humans will enjoy because of this
> non-toxic, non-defective, well-engineered, and safe-built
> technological product.

That may have sounded like a good idea when it looked like there would
be a longer interval between being taken seriously and the evaporation
of life as we know it. There's plenty of debate about how fast we can
go from infrahuman AGI to human-equivalent (bad term, I know) AGI, but
the general premise of this list is that transhuman AGI and rapid
departure from anything we can predict will follow pretty quickly
after human-equivalent AGI. Whether this means seconds or years
depends on your position on 'hard takeoff', but even 'years' would be
far too fast for our major social institutions to react or even for a
nontrivial portion of humanity to gain a clue about what's going on.

 * Michael Wilson

                