From: Michael Anissimov (firstname.lastname@example.org)
Date: Fri Jun 04 2004 - 11:55:45 MDT
Ben Goertzel wrote:
>One of my problems is what seems to be a nearly insane degree of
>self-confidence on the part of both of you. So much self-confidence, in
>a way which leads to dismissiveness of the opinions of others, seems to
>me not to generally be correlated with good judgment.
Everyone dismisses some opinions and respects others. Which opinions
you dismiss and which you respect will be based on your evaluation of
the opinion itself and the track record of the person giving the
opinion. Michael and Eliezer may be relatively dismissive towards many
of your ideas, but they often explain why. (Eliezer especially.) If
Eliezer dismisses your opinions so quickly, then why are there literally
hundreds of pages in the SL4 archives from occasions when he engaged you
in earnest? Also, it seems like your judgment of their overconfidence is
heavily based on exchanges between *you* and them - are they equally
dismissive towards *all* AGI designers? Clearly they
(probabilistically) respect the opinions of the authors of the
literature they read, at the very least, so it seems like there are a
lot of people they *do* respect.
>I don't want some AI program, created by you guys or anyone else,
>imposing its inference of my "volition" upon me.
The word "imposing" suggests something out of line with your volition.
But the whole point of any FAI is to carry out your volition. If the
volition it is carrying out is unfavorable and foreign to you, then that
would constitute a failure on the part of the programmers. The point is
to carry out your orders in such a way that the *intent* takes
precedence over the *letter* of your requests. Imagine a continuum of
AIs, one extreme paying attention to nothing but the letter of your
requests, the other extreme carrying your inferred intent so far that
you disapprove of the result. The task of the FAI programmer is to create an
initial dynamic that rests appropriately between these two extremes.
>When I enounced the three values of Joy, Growth and Choice in a recent
>essay, I really meant *choice* -- i.e., I meant *what I choose, now, me
>being who I am*. I didn't mean *what I would choose if I were what I
>think I'd like to be*, which is my understanding of Eliezer's current
>notion of "volition."
But we can't very well have an SI carrying out everyone's requests
without considering the interactions between the consequences of these
requests, right? CollectiveVolition is a sophisticated way of doing
that. You can say "I want the future SI derived from my seed to respect
human choice", but that still leaves open a massive space of alternative
AI designs, many of them UnFriendly. Successful FAI designs form a small
subset of that space, but do you know which constraints will be necessary to
create such a FAI? Eliezer has tossed around many ideas for such
constraints in his writings, which will function as urgently needed
theoretical feedstock for any technical implementation. He isn't
finished, but he's taken some major steps towards the goal, IMO.
>To have some AI program extrapolate from my brain what it estimates I'd
>like to be, and then modify the universe according to the choices this
>estimated Ben's-ideal-of-Ben would make (along with the estimated
>choices of others) --- this denies me the right to be human, to grow and
>change and learn. According to my personal value system, this is not a
>good thing at all.
Well, the idea is not to deny you the chance to grow and change and
learn, to the extent that such denial would bother you.
>I'm reminded of Eliezer's statement that, while he loves humanity in
>general in an altruistic way, he often feels each individual human is
>pretty worthless ("would be more useful as ballast on a balloon" or
>something like that, was the phrasing used). It now seems that what
>Eliezer wants to maintain is not actual humanity, but some abstraction
>of "what humanity would want if it were what it wanted to be."
Much of present-day humanity is all too fond of killing, harassing, and
torturing people. Can you really blame him for wanting to create an initial dynamic
that respects "our wish if we knew more, thought faster, were more the
people we wished we were, had grown up farther together; where the
extrapolation converges rather than diverges", rather than simply
following our volitional requests to the letter? Again, this is a
continuum problem, with two unfavorable extremes and a desirable
compromise in the middle.
>Eventually this series might converge, or it might not. Suppose the
>series doesn't converge, then which point in the iteration does the AI
>choose as "Ben's volition"? Does it average over all the terms in the
>series? Egads again.
Good question! The ultimate answer will be for the FAI to decide, and
we want to seed the FAI with the moral complexity necessary to make that
decision with transhuman wisdom and compassion. Eliezer and Co. won't
be specifying the answer in the code.
>So what SIAI seems to be right now is: A group of people with
>-- nearly-insane self-confidence
On SL4, this mostly manifests itself with respect to you. I would also
distinguish between overconfidence in mailing list discussions (which is
partially theatrical, ref Eliezer's recent post on being SIAI's mad
scientist in the basement), versus overconfidence in FAI implementation
decisions. Overconfidence with respect to the latter is a cardinal sin;
with respect to the former, it is not.
>-- a dislike for sharing their more detailed ideas with the
>peanut-brained remainder of the world (presumably because *we* might do
>something dangerous with their brilliant insights?!)
Eliezer has published many hundreds of pages on his FAI theory,
certainly more detail than I've seen from any other AGI designer. What
makes you think Eliezer/Michael have a "dislike for sharing their more
detailed ideas"?
>Yes, I know SIAI isn't just Eliezer. There's Tyler and Mike Anissimov.
>So far as I know, those guys aren't scary in any way. I have plenty
>respect for both of them.
Thanks! Unlike Eliezer, I *am* trying to be like Belldandy, sweetness
and light. I'm sad that Eliezer couldn't overcome his own condescending
tendencies, but if it makes him happy to behave the way he does, then I
think we should all respect that. Feel free to view me as a younger
Eliezer in the Everett branch where he retained his commitment to
sweetness and light.
--
Michael Anissimov
Advocacy Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/