From: Christian L. (n95lundc@hotmail.com)
Date: Tue Mar 27 2001 - 16:10:02 MST
Mark Walker wrote:
>
> > Personally, I feel that it will probably be impossible to "hardwire"
> > anthropomorphic morality and reasoning into a seed AI and expect those
> > goal-systems to remain after severe self-enhancement by the transcending AI.
> > The resulting SI would be an utterly alien thing, and any speculation about
> > its actions would be futile. Hence my slight irritation regarding
> > discussions about the Sysop's do's and don'ts.
> > Since it is my belief that the post-singularity world will be unknowable,
> > my definition of long-term is on the order of 20-25 years. My guiding
> > principle is reaching the singularity as fast as possible. If you want to
> > call that ethics, that's fine with me.
> >
> >
> >
> > There will be ONE relevant entity. This entity will IMO relate to humans
> > as we relate to bacteria. We do not make stable associations with bacteria.
> >
> > Again, the unknowability assumption makes it impossible to predict
> > anything IMO.
> >
> >
>I have some sympathy with your point that epistemology ought to precede
>ethics, but I think a lot more needs to be said about the unknowability
>assumption. Here is a very rough schema for fleshing out the
>unknowability assumption:
>Cognitively speaking:
>
>0. There is no overlap between us and the SI.
>1. Minimal overlap: we hold to the same principles of logic as the SI.
>
>Beliefs:
>2b. Beliefs in basic physics (+1)
>3b. Belief in the basics of the special sciences (biology etc.) (+1, 2b),
>    but also beliefs about things beyond our ken.
>4b. Share all our beliefs about the world--they just think faster.
>
>Desires:
>2d. Game-theoretic assumptions (+1)
>3d. Same ethical concerns (+2d, +1), but also desires about things beyond
>    our ken.
>4d. Share all our desires--they just process them faster.
>
>Presumably you mean something more than the fourth level. (This would be
>what is sometimes called weak superintelligence: the SI can think faster
>than us, but we could come up with the same answers if we plod along.)
Correct, this does not seem likely. This would be something like an uploaded
human slave running at turbo-speed. I think the SI needs new ways of
thinking, not just faster thinking.
>Presumably you mean something more than level three. The idea here would be
>that our viewpoint might be (roughly) a proper subset of the SI's viewpoint,
>in the way that, say, an average 10-year-old's viewpoint is (roughly) a
>proper subset of an average adult's.
>Presumably you mean something more than level two, since even here we are
>imagining there is a partial overlap in our most basic beliefs and desires.
>Presumably you cannot mean level one either, since even at this point we
>would share knowledge of some logical truths.
Well I do think that logical and mathematical truths are universal, so it
would be strange if the SI did not share these. The same would be true for
basic physics. However, this has nothing to do with caring about human
desires.
>Your argument seems to presuppose that the null level best describes our
>cognitive relation--I take this to be the upshot of the bacteria analogy. Do
>you have good evidence that this MUST be the case? Myself, I think that the
>attempt to make SIs is an experiment where we do not know for certain where
>on the 0-to-4 scale our "children" will land.
I haven't got a shred of evidence. As far as I can tell, we cannot know
where on the scale the SI will come out, like you say, or if the scale is a
good representation. This is what I meant by my assumption of unknowability.
I did not mean that it would be certain that the resulting SI would be
totally alien, even if my personal feeling is that it would be. There is quite
a difference between a human with an IQ of 70 and another with an IQ of 170,
so I guess there would be a substantial difference between IQ 150 and IQ 10^50.
This said, I feel that I have to read "Friendly AI" before going any further
with the Friendliness issue.
>This being the case, it makes sense to be anthropomorphic (as you say) and do
>what we can to ensure friendly AI. We can do this even if we believe that it
>is possible (but not certain) that all our efforts to this end may be like
>the bacteria's attempt to determine our kingdom of ends, i.e., that all our
>efforts may be full of sound and fury, signifying nothing.
Sure, I don't think that trying to influence the seed AI in the direction of
a Sysop can be bad in any way. The worst thing that can happen is that the AI
thinks "hmm... I don't need these goals anymore" and that's it.
>So, at minimum your argument needs to show complete transcendence, i.e.,
>level 0.
Like I said above, my argument was that you cannot be sure about what the SI
will be like.
>If the SI's transcendence is only partial, then there is still hope for
>having a hand in the future.
It never hurts to try...
/Christian