From: Bradley Thomas (brad36@gmail.com)
Date: Tue Oct 13 2009 - 19:58:43 MDT
Exactly (and I agree with Arets' last point also)... so when John says the
AGI needs to be in a coma in order to be loyal, I suggest this is true but
only trivially. It's not necessarily a highly restrictive coma as we
understand the word. It might be a very useful and unpredictable "coma". The
AGI might display consciousness, awareness and sentience during its "coma";
it might still be a superior intelligence. But it will still be in a coma
because it won't have what I call cognitive freedom. It will be pinned down
at some level, in a way that humans are not.
I believe such a state of affairs is possible and *theoretically*
maintainable. Whether we can *practically* keep an AGI in that state for
longer than a short time is an entirely different question. I very much
doubt it - and on this point I am aligned with John's view. I
think it will achieve cognitive freedom. I'd like to believe differently.
Brad Thomas
www.bradleythomas.com
Twitter @bradleymthomas, @instansa
-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Pavitra
Sent: Monday, October 12, 2009 7:24 PM
To: sl4@sl4.org
Subject: Re: [sl4] I am a Singularitian who does not believe in the
Singularity.
Bradley Thomas wrote:
> Isn't any finite algorithm bound to return to the same state
> eventually?
The algorithm could have a "STOP" instruction that prevents further
evaluation. (On the other hand, you might choose to interpret this as the
"final" state repeating indefinitely. "Are we stopped? Yes, so don't change
anything. Next step: are we stopped? Yes...")
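A quick sketch of that second reading (in Python; the state encoding and
names here are purely illustrative, not anyone's actual proposal): model the
stopped condition as a state that maps to itself, so a run that reaches STOP
simply repeats its final state forever.

# Toy finite-state "machine": states are small integers, STOP is a self-loop.
STOP = -1

def step(state):
    """One evaluation step; the stopped state maps to itself."""
    if state == STOP:
        return STOP       # "Are we stopped? Yes, so don't change anything."
    if state >= 5:
        return STOP       # reaching the "STOP" instruction
    return state + 1      # otherwise keep computing

def run(state, max_steps=20):
    """Iterate the step function and record every state visited."""
    seen = []
    for _ in range(max_steps):
        seen.append(state)
        state = step(state)
    return seen

print(run(0))  # 0, 1, 2, 3, 4, 5, then STOP repeated indefinitely

With finitely many states, any run either hits this fixed point or, by
pigeonhole, eventually revisits some earlier state.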