From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Sat Nov 18 2000 - 16:50:34 MST
Ben Goertzel wrote:
> > I am not sure I agree with your exact phrasing. I would rephrase
> > as follows:
> > 1) Once an AI system is smart enough to restructure all the matter of the
> > Solar System into vis own mind-stuff, we will not be able to guide vis
> > development *if ve doesn't want us to*.
> Even if it wants us to, we may not have the intelligence to understand how
> its mind works well enough to guide it...
> How much can a dog guide YOUR development?
Not a difficult problem to discuss - let's see...
One mental image that comes to mind on reading your question is that the
superintelligent AI, confronting a question of internal design, says: "Hey,
programmer, should I vesper my speems?" And the programmer can only shrug,
because ve doesn't know what the heck a speem is.
A) If speem-vespering is a strictly internal design question, however, then
it's hard to see why the AI would need the help - just choose the path of
greater efficiency, reliability, speed, coverage, et cetera.
B) If speem-vespering has humanly understandable moral consequences - if, for
example, speem-vespering is needed to stop the Solar System from turning into
a black hole - then the programmer should be able to say: "Yes, vesper the
speems, please." (And in a simple, unambiguous case like that just discussed,
I'd hope the AI would know enough to decide without needing to ask the
programmer.)
C) If speem-vespering is a moral question that has no humanly understandable
consequences, then the most we can hope for is that the AI will make the same
decision that would be made by a transhuman - that is, someone who started out
as a sane, reasonably moral human, and upgraded, increasing the complexity and
depth of vis moral philosophy along with vis intelligence. And we do this by
grokking, or asking the AI to grok, the way in which humans grow their own
moral philosophies; so that as the AI grows, it can grow its own moral
philosophy as well, preserving the original domain of Friendliness and
extending it to cover new questions.
D) If speem-vespering is a moral question that has no humanly understandable
component and which is entirely orthogonal to the entire realm of human moral
experience, so that nothing currently in our minds could have the slightest
bearing on speems one way or the other, then - by definition - neither you nor
I *care* what decision the AI makes, and the problem is one for the community
of future transhumans and the Sysop, not Friendly AI programmers in the
present day.
It is scenario C, of course, that I find most exciting, as it captures the
fundamental challenge of "seed morality".
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence