From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 12:46:41 MDT
Ben Goertzel wrote:
>
> No, it's also based on statements such as (paraphrases, not quotes)
>
> "I understand both AI and the Singularity much better than anyone else
> in the world"
No, just much better than anyone whose opinions on the subject have been
written up and made their way in front of my eyeballs.
> "I don't need to ever study in a university, because I understand what's
> important better than all the professors anyway."
False.
> "A scientist who doesn;t accept SIAI's theory is not a good scientist"
I repudiate this.
> "SIAI's theories are on a par with Einstein's General Relativity Theory"
I repudiate this.
> Eli seems to have a liking for James Rogers' approach, but has been
> extremely dismissive toward every other AI approach I've seen him talk
> about.
There are a hundred mistakes for every correct answer. I'm very fond of
many scientific fields, but not of many popular contemporary approaches to AI.
I like Marcus Hutter's work, and I have a sentimental fondness for
Douglas Lenat's old work with Eurisko. But mostly I reserve my fondnesses
for fields other than AI. I don't think much of many *AGI* approaches out
there; specific tools such as neural networks, Bayesian belief networks,
and so on, may work just fine.
> The notion of your "volition" as Eliezer now proposes it is NOT
> necessarily aligned with your desires or your will at this moment.
>
> Rather, it's an AI's estimate of what you WOULD want at this moment, if
> you were a better person according to your own standards of goodness.
>
> Tricky notion, no?
Yep, that it is.
> The idea, it seems, is to allow me to grow and change and learn to the
> extent that the AI estimates I will, in future, want my past self to be
> allowed to do.
>
> In other words, the AI is supposed to treat me like a child, and
> estimate what the adult-me is eventually going to want the child-me to
> have been allowed to do.
A powerful analogy, but a dangerous one. Human children are designed by
natural selection to have parents; human adults are not.
> In raising my kids, I use this method sometimes, but more often I let
> the children do what they presently desire rather than what I think
> their future selves will want their past selves to have done.
>
> I think that, as a first principle, sentient beings should be allowed
> their free choice, and issues of "collective volition" and such should
> only enter into the picture in order to resolve conflicts between
> different sentiences' free choices.
But what if you are horrified by the consequences of this choice in 30
years? How much thought did you put into this before deciding to make it
the eternal law of the human species? Are you so confident in the power of
your moral reasoning, oh modest Ben?
> I much prefer to embody an AI with "respect choices of sentient beings
> whenever possible" as a core value.
>
> Concrete choice, not estimated volition.
>
> This is a major ethical choice, on which Eliezer and I appear to
> currently significantly differ.
Actually, I think the key point of our difference is in how to make the
meta-decision between concrete choice and extrapolated volition. I don't
dare make the concrete choice myself, so I turn it over to extrapolated
volition. (I arrogantly try to reduce my probability of massively screwing
up, rather than humbly submitting to the sacred unknown.) I certainly hope
that our extrapolated volition is to respect concrete choice, and if I were
a Last Judge and I peeked and I saw concrete choice extensively violated
and there wasn't a good reason, I'd veto.
> The fact that Eliezer has said so to me, in the past. He said he didn't
> want to share the details of his ideas on AI because I or others might
> use them in an unsafe way.
I think I may have said something along the lines of, "There is no sane
reason to discuss AGI with you until I have finished discussing Friendly
AI, since you need to know the FAI stuff anyway, and if you can't get FAI
you probably can't get the AGI either."
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence