Re: Threats to the Singularity.

From: Gordon Worley (redbird@rbisland.cx)
Date: Mon Jun 17 2002 - 22:11:16 MDT


On Monday, June 17, 2002, at 10:54 PM, Samantha Atkins wrote:

> Gordon Worley wrote:
>
>> On Monday, June 17, 2002, at 06:51 PM, Samantha Atkins wrote:
>
>> My core looks something like this. I want to make the universe a
>> better place. A better place to live. A place that solves new,
>> interesting problems. A place that I'd like to stay, but wouldn't
>> want to visit.
>
> Thanks for offering it. Now, if we could just get a bit more of a handle
> on what you/we mean by "better" we could move right along.

Well, this is the part that's fuzzy for me. I have some idea of what
would be better, but it's not all that clear. I can't think of too many
specific things that I could write down. For most of us, I think that
the `better' metric arises from the things we enjoy. So long as those
programming the Seed AI aren't serial killers or the like, everything
should turn out Friendly.

>> I'd like for this to include me in it, but if it turns out that the
>> universe can't be better so long as I'm still in it, then I'll get
>> out. I think the "yuck factor" in this is that I think the same way
>> about everything. If you're making the universe a worse place, I
>> don't really want you in it. I hope that getting "you" out of it only
>> involves convincing you not to do whatever it is that is making the
>> universe worse.
>
> Doesn't this assume that you and perhaps other entities are relatively
> immutable over time no matter what your wishes? I don't consider that
> a particularly tenable notion assuming an ability to upload and/or
> continuously augment, self-examine and change. I would also challenge
> any being to conclusively prove another being both of more harm than
> good to the universe AND utterly incapable of ever changing.

If one can change, that's the preferable solution. If, however, a being
refuses to be fixed, an SI would need to do something with ver (which
may include killing ver, if ve refuses to be put in a VR).

>
>> I think this seems yucky because this sounds just like the kind of
>> thing Hitler would say. The difference is that I have compassion for
>> all life. I want to see the universe better and would like for that
>> to include everyone and everything in it. However, if all attempts at
>> this prove impossible, I'm not going to say "well, okay, I guess the
>> universe is just going to suck", but "okay, let's see what the
>> limiting factors are and what we have to do to get around them".
>
>
> I don't believe in an "improved universe" by eradicating sentients that
> seem problematic except in very, very limited circumstances where an
> entity is capable of such destruction, not stoppable otherwise, as to
> force the decision. I don't consider not being as productive as some
> might like to be a reasonable criterion. I don't consider not being as
> rational or intelligent as such a criterion either. If your goal is
> maximizing life then I don't think you see these as criteria for
> extermination either.

Extermination is hardly something that I plan on anyone doing every day,
if ever. Getting back to the original post, the point is not whether
you'll actually execute someone, but whether you are unattached enough
to see it as a possibility that you may have to act on.

>>>>>> Some of us, myself included, see the creation of SI as important
>>>>>> enough to be more important than humanity's continuation. Human
>>>>>> beings, being
>>>>>
>>>>> How do you come to this conclusion? What makes the SI worth more
>>>>> than all of humanity? That it can outperform them on some types of
>>>>> computation? Is computational complexity and speed the sole
>>>>> measure of whether sentient beings have the right to continued
>>>>> existence? Can you really give a moral justification or a rational
>>>>> one for this?
>>>>
>>>> In many ways, humans are just over the threshold of intelligence.
>>>
>>> Whose threshold? By what standards? Established and verified as the
>>> standards of value how?
>> You're asking for a definition of intelligence. Tough question!
>
> A tougher one is why intelligence is on top of your value stack as the
> measure of the worthiness of various beings to exist and as the
> principal measure of "better".

Oops, I answered the immediate question while ignoring the one that
spawned it. A wiser being would be better at making the universe better.

>> At any level of intelligence, though, all the problems at the limits
>> of solvability look interesting. As great as we think we are, we can
>> already see that there are some interesting problems out there that we
>> can't find solutions to (like the halting problem).
>
> The halting problem is provably unsolvable by any and all levels of
> intelligence.

Well, that is under our current theories of mathematics. Using different
theories, it may turn out to be solvable (and then we'll wonder how we
missed the answer all those years), but those theories may simply be
beyond what the human mind can comprehend. ;-)

>> And, unless it turns out that intelligence doesn't scale very well,
>> the trend tells us that even more interesting questions are out there
>> for more intelligent minds to solve. I doubt that anything will ever
>> be so intelligent that it will be able to solve every problem.
>
> Sure, but so? Is this all there is? Is it criteria enough for what is
> and is not of value?

Getting back to the original question that spawned this subthread, an SI
should be more capable of making the universe better than a human. So
long as the SI is wiser than humans, it has more `right' than some
humans to use some amount of matter.

>> One should be humble, but not negative. Being negative is just as
>> irrational as flattery.
>> Much as humans get to clear away ants if they're keeping the universe
>> from getting better, an SI could clear away some humans if they got in
>> the way. If the SI is compassionate, ve will see that the humans are
>> doing some good and, being self aware, are able to change themselves
>> to do more good. Unlike a human, who is unable to solve the ant
>> problem by any means other than getting the ants out of the way (be
>> that killing them or displacing them), an SI can solve the human
>> problem by helping the humans.
>
> Ants, while they may be inconvenient at a picnic or marching across the
> kitchen, are not in the way of the universe getting better. People are
> likely to be even less so. I agree of course that there are much
> better solutions in the case of humans than extermination.

You never know, those ants might be preventing Ben or Eliezer from
getting Seed AI programming done. Besides, ants are just an example;
insert any kind of obstruction you want here that is clearly of less than
human-level intelligence.

> From your earlier post, at what point would you not battle for an SI in
> the process of being born? Suppose you had super weapons capable of
> laying waste to entire nations of opposition. Would you use them?

This all depends on the situation. In most cases the project would be
better off cutting its losses and trying again, since killing lots of
people tends to get other people even more upset. I would not use the
weapons unless extermination was really, really, really the only option.

>> If some humans prove to be beyond help, though, I don't think it's
>> totally wrong to clear them out in some way. Maybe that just means
>> letting them live in a simulation where they can kill their virtual
>> selves. I'll leave the solution up to a much more intelligent SI.
>
> Define "beyond help". I would support popping the ones that were too
> great a danger to self and others into a safety zone of some kind (VR
> or otherwise) until they learn better and/or can be cured in a way they
> are willing to undergo. I don't think we should leave it up to the
> not-yet-existent SI now though. It is these kinds of questions and the
> answer to them that will make the difference in the level of support
> and vilification.

Yeah, I guess this is the kind of thing that people want answers to.

Well, VR is a good choice. But what if an Evil SI surfaces? In that
case it might very well be a better choice to kill it rather than try to
fix it. I don't think we know enough about what kinds of situations
will arise to know what would be good options.

>>>> But, it's not nearly so simple. All of us would probably agree that
>>>> given the choice between saving one of two lives, we would choose to
>>>> save the person who is most important to the completion of our
>>>> goals, be that reproduction, having fun, or creating the
>>>> Singularity. In the same light, if a mob is about to come in to
>>>> destroy the SI just before it takes off and there is no way to stop
>>>> them other than killing them, you have on one hand the life of the
>>>> SI that is already more intelligent than the members of the mob and
>>>> will continue to get more intelligent, and on the other the life of
>>>> 100 or so humans. Given such a choice, I pick the SI.
>>>
>>>
>>> But that is not the context of the question. The context is whether
>>> the increased well-being and possibilities of existing sentients,
>>> regardless of their relative current intelligence, is a high and
>>> central value. If it is not then I hardly see how such an SI can be
>>> described as "Friendly".
>> To a Friendly intelligence, this is important.
>
>
> OK. And that is what you wish to build, right? :-)

Yes.

--
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                                  --Lewis Carroll

