From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 04 2004 - 12:22:14 MDT
Hi Michael,
> Also, it seems like your judgement of their overconfidence is heavily
> based on exchanges between *you* and them -
No, it's also based on statements such as (paraphrases, not quotes)
"I understand both AI and the Singularity much better than anyone else
in the world"
"I don't need to ever study in a university, because I understand what's
important better than all the professors anyway."
Etc. etc.
(Eli)
"A scientist who doesn;t accept SIAI's theory is not a good scientist"
"SIAI's theories are on a par with Einstein's General Relativity Theory"
"I hesitate to call the SL4 list a peanut gallery." [meaning: nothing
anyone else says on the list is of value]
(Michael Wilson)
> are they equally dismissive towards *all* AGI designers?
Basically: Yes, they are.
Eli seems to have a liking for James Rogers' approach, but has been
extremely dismissive toward every other AI approach I've seen him talk
about.
However, most AI folks, when obnoxiously dismissed by him, don't bother
to argue (since they have better things to do with their time, and since
he's not an "AI professional" whose opinions "matter" in academia or
industry).
> The word "imposing" suggests something out of line with your volition.
> But the whole point of any FAI is to carry out your volition. If the
> volition it is carrying out is unfavorable and foreign to you, then
> that would constitute a failure on the part of the programmers.
The notion of your "volition" as Eliezer now proposes it is NOT
necessarily aligned with your desires or your will at this moment.
Rather, it's an AI's estimate of what you WOULD want at this moment, if
you were a better person according to your own standards of goodness.
Tricky notion, no?
> But we can't very well have an SI carrying out everyone's requests
> without considering the interactions between the consequences of these
> requests, right? CollectiveVolition is a sophisticated way of doing
> that.
Yes, but there's a difference between collective volition and collective
*will*.
Collective will, collective choice -- that's commonly known as
"democracy." Yes, it's not easy to implement, but it's a
well-understood social technology.
Collective volition is different from this: more interesting but
scarier.
You don't seem to be confronting this difference...
> Well, the idea is to not deny you the chance to grow and change and
> learn to the extent that that would bother you.
The idea, it seems, is to allow me to grow and change and learn to the
extent that the AI estimates I will, in future, want my past self to be
allowed to do.
In other words, the AI is supposed to treat me like a child, and
estimate what the adult-me is eventually going to want the child-me to
have been allowed to do.
In raising my kids, I use this method sometimes, but more often I let
the children do what they presently desire rather than what I think
their future selves will want their past selves to have done.
I think that, as a first principle, sentient beings should be allowed
their free choice, and issues of "collective volition" and such should
only enter the picture in order to resolve conflicts between different
sentient beings' free choices.
> Good question! The ultimate answer will be for the FAI to decide, and
> we want to seed the FAI with the moral complexity necessary to make
> that decision with transhuman wisdom and compassion.
I much prefer to imbue an AI with "respect the choices of sentient
beings whenever possible" as a core value.
Concrete choice, not estimated volition.
This is a major ethical choice, on which Eliezer and I currently appear
to differ significantly.
> On SL4, this mostly manifests itself with respect to you.
This is only because no one else bothers to take the time to challenge
Eliezer in a serious way, because:
* they lack the technical chops and/or breadth of knowledge to do so,
and/or
* they lack the inclination (they don't find it entertaining, they don't
take Eliezer seriously enough to bother arguing with him, etc.)
> Eliezer has published many hundreds of pages on his FAI theory,
> certainly more detail than I've seen from any other AGI designer. What
> makes you think Eliezer/Michael have a "dislike for sharing their more
> detailed ideas"?
The fact that Eliezer has said so to me, in the past. He said he didn't
want to share the details of his ideas on AI because I or others might
use them in an unsafe way.
More recently, Michael Wilson has made comments to me in a similar vein.
-- Ben G