From: Alicia Madsen (fsadm1@uaf.edu)
Date: Tue Oct 03 2000 - 16:34:46 MDT
This upper theoretical limit people speak of, does it all go back to how well
humans grasp the concept of infinity? As a college student, I have just begun
to grapple with calculus and am not familiar with its peculiarities. In
replying to my post, please point out as many holes in my logic as you can.
Peter Voss said recently on sl4@sysopmind.com: "I agree with you, that here we
are in 'intuition' territory. My own approach to AI design leads me to believe
that at a certain point of intelligence there will be enough of an exponential
burst for one system to dominate. I don't think that hardware will be a major
limiting factor. On the other hand, perhaps each type of intelligence has its
own upper theoretical limit. If so, I haven't yet identified it."
Perhaps if one "type" of intelligence reaches its limit at a certain number n,
and another "type" reaches its intelligence limit at a certain number k, then
all that must be done is to rewrite their functions so that they are continuous
together at a new limit. My question is: if we have these webminds, and they
are capable of rewriting their programs so that they can continually increase
their capabilities, and then work together, why worry about upper limits? I do
not think that an "upper" limit will exist, since you speak of an exponential
growth rate, which is continuous everywhere, together with this capability to
be rewritten favorably.
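If I try to write this intuition out in calculus terms (a rough sketch of my own,
with the functions f_A, f_B and the combined f_AB being nothing more than my own
illustrative names), it might look like this: suppose the intelligence of type A
over time is f_A(t) with limit n, and type B is f_B(t) with limit k. A combined,
rewritten system might then behave like

\[
f_{AB}(t) = f_A(t) + f_B(t), \qquad \lim_{t \to \infty} f_{AB}(t) = n + k,
\]

so each rewriting pushes the ceiling higher, and a growth curve that stays
exponential, like $c\,e^{rt}$ with $r > 0$, has no finite limit at all.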
This is why I think that AIs, or even "webminds" as they are called, will depend
on each other, and for their own survival will not "dominate" one another. In my
opinion, an AI is like all other AIs and thus there is only one of them in the
first place, especially because they will be sharing information. It is true that
there are many parallels between the AI system and humanity, because we are
friends with the logic of Darwin, as we are trapped in the existential
circumstance thrust upon us.
But Darwin is also not limiting, only a tool we choose to use, and we are not
to fear this tool. In my culture (Inupiaq Eskimo) there are examples from past
and present of elders leaving the small community when food is scarce and
wandering off to die so that the community may survive. I think that because
humanity has the choice, and has demonstrated the ability to make this choice
of suicide, an AI system will also have this choice, as we are in the same
condition. A human interface with the baby AI or webring will not jeopardize
it, because we cannot lie to it.
Thus my opinion is that AIs depend on each other for survival, and are not
limited in intelligence, nor by their existential circumstance.
I follow Eliezer Yudkowsky's logic that we cannot lie to an AI, at least not
for long, because the logical flaw will be easy to spot. So it will not be an
issue. What I find interesting is this concept of AIs having familial
relationships, although I do not think it is of much importance in the long
run towards an SI. If humans are able to interface with the AI and "webrings",
then we will shape the graph of their intelligence in the beginning, and so I
do not worry about AIs having moral dilemmas, because of the guidance they will
receive from their human interface, or even about their falling out of the
community of AIs and "dying". With the development of nanotechnology well
underway, and also the presence of many individuals and organizations
interested in AI, I have no fear that an SI will not eventually exist, as the
rat race has already begun.
Alicia Madsen