From: Olie Lamb (neomorphy@gmail.com)
Date: Tue Aug 29 2006 - 22:27:07 MDT
Note the difference between the words "would" and "could"
John K Clark <jonkc@att.net> wrote:
>> A very powerful AI may continue its growth exponentially until a certain
>> point, which is beyond our current capability of understanding,
>
> OK, sounds reasonable.
>
>> where it concludes that it's best to stop
>
>Huh? How do you conclude that this unknowable AI would conclude that it
>would be best if it stopped improving itself?
Not would. Could.
Let's start with the basics:
There is a large set of possible goals for an intelligent entity to
have. A number of us happen to think that no particular goal is
required for intelligence. Some people frequently assert that some
goals are "necessary" for any intelligence. I've yet to have
difficulty finding a counterexample, but I'm not quite sure how to go
about demonstrating my contention...
*Calls for set-logic assistance*
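For what it's worth, here is roughly the quantifier structure I'm after
(just a sketch; "I" is my label for the set of possible intelligences and
G(i) for the goal set of intelligence i):

    Their claim:     \exists g \, \forall i \in I : \; g \in G(i)
    My contention:   \forall g \, \exists i \in I : \; g \notin G(i)

So establishing the "necessity" claim means quantifying over every possible
intelligence, while refuting it only takes one counterexample intelligence
per candidate goal g.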
A commonly presumed subset of possible goals is the goal set "Gain &
retain control of X", where X may be any of a number of things. In
particular, it may be an area of space.
If, say, retaining control of a set area of space for a given duration
were incompatible with expanding the space over which one had control,
the best satisfaction of the goal set could be achieved by not
expanding the sphere of influence.
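To make that concrete, here is a toy calculation (the probabilities and
the p_retain function are invented purely for the sketch; Python is just
for the arithmetic): score each strategy by the probability of still
controlling X when the required duration expires.

    # Toy model: the goal is "retain control of region X for duration T".
    # The probabilities below are invented purely for illustration.
    def p_retain(strategy):
        """Probability of still controlling X when the duration expires."""
        if strategy == "hold":
            return 0.95   # all resources go to defending X
        if strategy == "expand":
            return 0.70   # expansion diverts resources and invites conflict (by assumption)
        raise ValueError(strategy)

    print(max(["hold", "expand"], key=p_retain))
    # -> "hold": under these assumptions, not expanding best satisfies the goal set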
Example:
Imagine for a moment an intelligence in an area of limited resources -
say, one stuck on a rock a very, very long way from any other materials
(an extra-galactic+ long way). That intelligence has discovered that it
can continue operating for a very long time at a given intelligence
"level" (computations per second). However, by consuming the
available energy on the rock at a faster rate, it would be able to
increase its processing ability.
Would it be reasonable for that intelligence to increase its
computation rate, in the hope that it might be able to think itself
out of its predicament? Or /might/ it consider sticking with what it
had for the time being?
Or perhaps you think that "improve" should be defined in such a way
that a reduction in computing power and problem-solving ability counts
as an improvement?
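If it helps, here is the arithmetic of that trade-off as a toy sketch
(every quantity - ENERGY, the outcome function, the burn rates - is
invented): with a fixed energy budget, a higher burn rate buys more
computations per second but a proportionally shorter lifetime, so the
total computation comes out the same either way. The only open question
is whether thinking faster changes the odds of escaping before the
energy runs out.

    # Toy model of the stranded intelligence.  All numbers invented.
    ENERGY = 1.0e6                       # usable energy on the rock (arbitrary units)

    def outcome(power):
        """(compute rate, lifetime, total computation) at a given burn rate."""
        rate = 100.0 * power             # ops/sec, assumed proportional to power
        lifetime = ENERGY / power        # seconds until the energy is gone
        return rate, lifetime, rate * lifetime

    for power in (1.0, 10.0, 100.0):     # cautious -> aggressive burn rates
        rate, life, total = outcome(power)
        print(f"power={power:6.1f}  rate={rate:9.1f}  lifetime={life:10.1f}  total={total:.0f}")

    # Total computation is identical in every row; burning faster only trades
    # longevity for speed of thought.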
> Is it common for intelligent
>entities to decide that they don't want more control of the universe?
http://en.wikipedia.org/wiki/Parinirvana
It could be inferred that some 700 million people decided just so, or
at least a fair percentage of them did. Most of them are relatively sane,
and a lot more intelligent than is required to use symbolic logic. I
don't know how many humans you would need to match your definition of
"common"...
On 8/30/06, John K Clark <jonkc@att.net> wrote:
> "Ricardo Barreira" <rbarreira@gmail.com>
>
> > How do you even know the AI will want any control at all?
>
> If the AI exists it must prefer existence to non-existence
Prove it!
Not just for an ideal superintelligence: prove that ANY intelligence must
prefer existence to non-existence.
I think I can imagine a few counterexamples, thus disproving the contention.
(Nb: I have noted your comments to )
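Here is the shape of one such counterexample (entirely my own
construction, nothing more): an agent whose utility function scores
outcomes only by whether a single task gets done, with no term for its
own continued existence, can coherently rank "finish and shut down"
above "persist with the task unfinished".

    # Sketch of a counterexample agent (invented for illustration).
    def utility(task_done, agent_exists):
        """Scores outcomes solely by task completion; existence contributes nothing."""
        return 1.0 if task_done else 0.0

    options = {
        "finish the task, then shut down": utility(True, False),
        "persist forever, task unfinished": utility(False, True),
    }
    print(max(options, key=options.get))   # -> "finish the task, then shut down"

By its own lights, such an agent does not prefer existence to
non-existence, which is all the counterexample needs.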
> > I challenge you to prove otherwise
> Prove? This isn't high school geometry; I can't prove anything about an
> intelligence far, far greater than my own;
> and after that
> it is a short step, a very short step, to what Nietzsche called "the will to
> power".
And, of course, Nietzsche is the icon of understanding
intelligences-in-general, like, say, women... *rolls eyes*
--Olie