From: Aubrey de Grey (ag24@gen.cam.ac.uk)
Date: Fri May 21 2004 - 05:33:24 MDT
Eliezer Yudkowsky wrote:
> >>I would presently support the flat general rule that things which look
> >>like minor problems, but which you don't quite understand, are blocker
> >>problems until fathomed completely.
> >
> > I'm very heartened at this, because I agree 100%. My difficulty is with
> > the idea that the path to this total understanding is (or even might be)
> > finite, let alone tractable, in length. But then, most biogerontologists
> > still think that about curing aging, so I remain wide open to persuasion!
>
> This is possibly a good analogy; curing aging would be extremely difficult
> if we needed to fathom biomedical symptoms of aging one by one and treat
> them (correct me if I'm mistaken).
You're quite correct.
> Curing aging looks much easier if you
> suppose there might be a small library of underlying causes, and easier
> still if you just reprogram all the cells using nanotech. In the last
> case we deal not with the problem of comprehending aging, but simply with
> the problem of creating youth.
This is not so correct. My first thought was to defer discussion of this,
but actually a further exploration of the analogy seems to lead me to see
what still troubles me in FAI, so I'll elaborate. Either with or without
sophisticated nanotech, by my "SENS" approach we do indeed avoid the problem
of comprehending aging, but we don't so much "create youth" as clear away
the barriers that our metabolism increasingly experiences to maintaining
(or restoring) youth itself. This is an important distinction, because it
means that we can get away with not only not understanding aging but also
not understanding metabolism! (Note: I use metabolism in its strict and
rather general sense here, to mean the entire network of biochemical and
cellular processes that keep us alive from one day to the next.) These
barriers are of metabolism's own adventitious making, of course, but
that isn't relevant here -- what matters is that metabolism is a system
to keep us alive indefinitely (because it is a state machine, its
future behaviour depends only on its present state, so restoring that
state restores its function), just not a perfect such system.
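If it helps, here is the shape of that argument as a toy sketch
(Python; every name and number is invented purely for illustration,
and it makes no pretence of being biology):

import random

def metabolic_step(state):
    # Black box: one day of metabolism. We never model its internals;
    # we only know it generates a little damage as a side effect.
    state = dict(state)
    state["damage"] += random.random() * 0.01  # adventitious by-products
    return state

def alive(state, threshold=100.0):
    # Metabolism keeps us going until accumulated damage is lethal.
    return state["damage"] < threshold

def repair(state, floor=10.0):
    # The "SENS" move: don't redesign or comprehend metabolism, just
    # periodically clear accumulated damage back below a safe floor.
    state = dict(state)
    state["damage"] = min(state["damage"], floor)
    return state

state = {"damage": 0.0}
for day in range(100000):        # far beyond an unrepaired lifespan
    state = metabolic_step(state)
    if day % 3650 == 0:          # repair roughly once a decade
        state = repair(state)
assert alive(state)              # still below the lethal threshold

Note that repair() never inspects, let alone comprehends, the black-box
transition function; it only clears the damage away. So, by analogy: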
> Similarly, FAI doesn't require that I understand an existing biological
> system, or that I understand an arbitrarily selected nonhuman system, but
> that I build a system with the property of understandability. Or to be
> more precise, that I build an understandable system with the property of
> predictable niceness/Friendliness, for a well-specified abstract predicate
> thereof. Just *any* system that's understandable wouldn't be enough.
What I would like to see is an argument that there can, in principle, be
a system with the property of understandability (by at least a few 21st
century humans) and also with the property of considerably
greater-than-human cognitive function. (I avoid "intelligence" because
I want to try
to focus the discussion on function, and thence on the reasons why we may
find these machines worth making, leaving aside for the moment the idea
that we need to invent FAI before anyone invents unfriendly AI.)
Now, I readily accept that it is not the case that complex systems are
*always* effectively incomprehensible to less complex systems --
checking an answer can be far easier than producing one (see the
sketch after this paragraph). I have no problem with the idea that
"self-centredness" may be avoidable. But
as I understand it you are focusing on the development of a system with
the capacity for essentially indefinite cognitive self-enhancement. I
can't see how a system so open-ended as that can be constrained in the
way you so cogently point out is necessary, and I also can't see how
any system *without* the capacity for essentially indefinite cognitive
self-enhancement will be any use in pre-empting the development of one
that does have that capacity, which as I understand it is one of your
primary motivations for creating FAI in the first place. (In contrast,
I would like to see machines autonomous enough to free humans from the
need to engage in menial tasks like manufacturing and mining, but not
anything beyond that -- though I'm open to persuasion as I said.)
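To make that concession concrete, here is a toy illustration (Python
again; every name is invented for the example, and I do not suggest
that verifying an FAI would look remotely like this) of the familiar
asymmetry between producing an answer and checking one:

from itertools import combinations

def check_certificate(numbers, target, subset):
    # Trivial verifier: does the claimed subset really hit the target?
    return all(x in numbers for x in subset) and sum(subset) == target

def opaque_solver(numbers, target):
    # Stand-in for an arbitrarily complicated solver; we never need to
    # understand how it works, only to check the certificate it emits.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers, target = [3, 34, 4, 12, 5, 2], 9
answer = opaque_solver(numbers, target)
assert answer is not None and check_certificate(numbers, target, answer)

The checker stays a few lines long however complex the solver becomes;
whether anything analogous can be arranged for a self-enhancing mind is
exactly what I am asking.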
What surprises me most here is the apparently widespread presence of
this concern in the community subscribed to this list -- the reasons
for my difficulty in seeing how FAI can even in principle be created
have been rehearsed by others and I have nothing to add at this point.
It seems that I am one of many who feel that this should be SIAI FAQ
number 1. Have you addressed it in detail online anywhere?
I'm also fairly sure that SIAI FAQ #2 or thereabouts should be the one
I asked earlier and which no one has yet answered: namely, how about
treating AI in general as a WMD, something to educate people not to
think they can build safely and to entice people not to want to build?
Thanks for your time on this.
Aubrey de Grey