Re: Volitional Morality and Action Judgement

From: Samantha Atkins (samantha@objectent.com)
Date: Tue May 25 2004 - 23:41:46 MDT


On May 23, 2004, at 3:38 PM, Eliezer Yudkowsky wrote:
>>> I was speaking of me *personally*, not an FAI. An FAI is *designed*
>>> to self-improve; I'm not. And ideally an FAI seed is nonsentient,
>>> so
>>> that there are no issues with death if restored from backup, or child
>>> abuse if improperly designed the first time through.
>> Funny, but we seem to have brains complex enough to self-improve
>> extragenetically and to augment ourselves in various ways. We also
>> have
>> the brains (we think) to build the seed of more complicated minds than
>> our own. I don't see where we aren't designed to self-improve. The
>> AI will be designed to do it more easily of course.
>
> Having general intelligence sufficient unto the task of building a
> mind sufficient to self-improve is not the same as being able to
> happily plunge into tweaking your own source code. I think it might
> literally take considerably more caution to tweak yourself than it
> would take to build a Friendly AI, at least if you wanted to do it
> reliably. Unlike the case of building FAI there would be a nonzero
> chance of accidental success, but just because the chance is nonzero
> does not make it large.
>

I don't know about "happily" or "plunging" for that matter. But we are
gaining the ability to improve upon some of our own hardware and
"software". You yourself have written of such possibilities (although
mostly of a rather extreme subset) and of some level of such
self-improvement possibly being essential to solving the problems we
face including building a truly Friendly AI. I am not speaking of
"plunging" but of carefully attempting such improvements as we can,
many of them by use of external devices (computer augmentation)
originally. Some internal chemical and or gene-therapeutic
enhancements are also possible in the short-term with the former being
more immediately available and more tested. More is of course
possible as new technologies come online. It is possible for us to
ratchet ourselves up the intelligence curve for a while. I think we
very much need to do so.

> That we can self-improve "extragenetically" is simply not relevant;
> that is passing on cultural complexity which we *are* designed to do.

On the contrary, it is relevant, since it brought us to the point of
more directly self-improving. It is a form of collective
self-reflection and self-modification.

> The other part of your analogy says, roughly speaking, human beings
> can (we hope) become FAI programmers, therefore, they can rewrite
> themselves. Leaving aside that this analogy simply might not work,
> it's a hell of a bar to become an FAI programmer, Samantha, it's one
> hell of a high bar. Most people aren't willing to put forth that kind
> of effort, and never mind the issue of innate intelligence. There is
> also a strictness and caution, which people are not willing to accept,
> again because it looks like work. Here I am, who would aspire to build
> an FAI, saying: "Yikes! Human self-improvement is way more dangerous
> than it looks! You've gotta learn a whole buncha stuff first." And
> lo the listeners reply, "But I wanna self-improve! Wanna do it now!"
> Which means they would go splat like chickens in a blender, same as
> would happen if they tried that kind of thinking for FAI.

You build up an image of yourself as thoughtful, intelligent, caring
and sane, and of others who suggest different paths as irresponsible,
relatively stupid and uncaring. This is getting very, very old. If we
wait around for the Eliezer-brain to figure everything out, even though
he can't explain it to anyone else (and believes he shouldn't even if
he could), then we will indeed go SPLAT!

>
> I am not saying that you will end up being stuck at your current level
> forever. I am saying that if you tried self-improvement without
> having an FAI around to veto your eager plans, you'd go splat. You
> shall write down your wishlist and lo the FAI shall say: "No, no, no,
> no, no, no, yes, no, no, no, no, no, no, no, no, no, yes, no, no, no,
> no, no." And yea you shall say: "Why?" And the FAI shall say:
> "Because."
>

There are many levels of self-improvement. Many of them do not
require an FAI minder to pursue successfully and fairly safely.
Since you have lately spoken of your FAI as not even being sentient, I
hardly think it likely we will look to it for that much wisdom about
what we should and should not attempt.

> Someday you will be grown enough to take direct control of your own
> source code, when you are ready to dance with Nature pressing her
> knife directly against your throat. Today I don't think that most
> transhumanists even realize the knife is there. "Of course there'll
> be dangers," they say, "but no one will actually get hurt or anything;
> I wanna be a catgirl."
>

Sigh. People will get hurt attempting to transcend the current "human
condition". There is no doubt about that. But that does not mean we
should not try where the tradeoffs seem reasonable and where the price
of remaining as we are is quite high.

>> I do not see that it is ideal to have the FAI seed be nonsentient or
>> that this can be strictly guaranteed. I don't see how it can be
>> expected to understand sentients sufficiently without being or
>> becoming
>> sentient.
>
> If you don't know how *not* to build a child, how can you be ready to
> build
> one? Is it easier to design a pregnant woman than a condom? I am
> taking the challenges in their proper order.

Your ordering is made up and carries no great significance. It makes
a poor counter to the argument that an FAI with the abilities you
advertise for it is unlikely without sentience.

- samantha
