Re: AGI Prototyping Project

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Mon Feb 21 2005 - 16:18:36 MST


|> So, a number of sub-questions :
|>
|> * Is Friendliness a religion to be hard-wired in to AGI?
|
|
| In a sense, yes.

|> * Is a sectarian AI a problem for us, here now? Do we care if we
|> just built what we can and impose our current viewpoint? Do we
|> back our beliefs in a gamble affecting all people if we are
|> successful?
|
| That's too simple a perspective -- one can only impose one's
| current viewpoint as the initial condition of a dynamic process. A
| viewpoint is not the sort of thing one can expect to be invariant
| under a long period of radical self-modification.

That's too complex a position. The question was whether we had to
specifically ensure that our own viewpoint did NOT dominate the
dynamic process in the long term - would an atheistic AGI be a bad
thing for example?

|> * Is a non-sectarian AI a problem for us - do we care if someone
|> ELSE builds a religious AI that we don't agree with?
|
| Very much!

Why? If a viewpoint is not the sort of thing one can expect to be
invariant under a long period of radical self-modification, then why
does it matter? Pick one: Have Cake; Eat Cake.

|> Now, an assumption which I disagree with is that human life has
|> any value other than its intelligence.
|
|
| Well, any value to *whom*?
|
| The human race has value to *me* other than its intelligence....
|
| To the universe as a whole, it's not clear how much "value"
| *intelligence* has (nor in what sense the concept of "value"
| applies)...

To me, it is an open question whether the universe is self-aware.

Value, in any sense of the word, is subjective. Without a subject,
there is no value. As such, an unintelligent humanity (or its robotic
equivalent) can have no value other than its utility to other
value-imparting intelligent organisms.

What other value does the human race have for you outside of its
intelligence?

|> There are four major ways to be frightened by AGI that come to
|> mind now, only one of which I think is worth worrying about.
|>
|> 1) Skynet becomes self-aware and eats us
|> 2) AGI kills us all in our own best interests. How better to eliminate world hunger?
|> 3) AGI needs our food, and out-competes us. Bummer.
|> 4) AGI destroys our free will
|>
|> I am only worried about (1).
|
|
| There is a lot of middle ground besides the extremes, combining
| factors of several options...

Indeed. I was proposing an outside-in search for what you find
interesting :)

|> * In AGI, psychological instability will be the biggest problem,
|> because it is a contradiction to say that any system can be
|> complex enough to know itself.
|
| Perhaps no complex AI system can know itself completely, but there
| can be increasingly greater degrees of approximate knowing; humans
| are nowhere near the theoretical maximum...

Agreed. But there remains a (to me) insurmountable risk (another
horizon problem) that an organism may become psychologically unstable
regardless of intelligence, because even a more intelligent organism
may not understand itself sufficiently. But maybe psychological
stability converges rather than diverges - perhaps it is simpler to
control than we imagine.

-T


