Re: [SL4] brainstorm: a new vision for uploading

From: Nick Hay (nickjhay@hotmail.com)
Date: Tue Aug 12 2003 - 18:38:46 MDT


king-yin yan wrote:
> I think Eliezer's vision of a single superAI is rather problematic. I
> think a diversity of specific-purpose AI's is the more likely scenario.
> The reasons are as follows:

Below, I've tried to address separately (1) whether a single FAI is a good
idea and (2) whether it is more likely there'll be (in the near term) one AI far
superior to the rest, or a community of rough equals.

> 1. The definition of Friendliness is a political issue. There is no such
> thing as value-free, "objective" morality; the Friendly AI can only
> *inherit* the moral system of its creators and share-holders (if done
> right at all!) and the Friendly-AI-as-a-God-like-moral-figure is unsound.
> The debate about Friendliness will itself start a political war, rather
> than solve all political problems.

The political issues are the human differences; we share a great deal of
moral structure that goes unnoticed in political arguments. The structures
which allow us to reason about morality at all, the particular forces that shape
what counts as a morality, are human universals. This is the target of
Friendly AI: what all (neurologically normal) humans share, not the surface
political arguments of any one group.

Because of this, independence of the FAI from any programmer's peculiar opinions
or morals is a central design aim. It shouldn't matter who made the FAI, as
long as they designed it with this independence in mind (and they succeed in
creating it!). Interim approximations will probably use the programmers'
beliefs, since they're the nearby humans, but the final result will not
depend on that - we don't want programmer dependence any more than anyone
else does.

You're right about that: analogies between Friendly AIs and anything human-like,
gods especially, are unsound and misleading.

> 2. One may argue that superAI will be very powerful and everyone
> would want to be on "our" side. But this also is an unlikely scenario
> because it does not resolve the problem of *who* will have more
> power within our "party". Once again this would depend on the
> definition of Friendliness and thus start a war. (I'm actually quite
> pacifist by the way =))

The definition of Friendliness is independent of any one human. We're all
members of the same species, and that accounts for most of our features.
However, the tiny individual differences between humans are adaptively
important and are thus what we tend to see and argue about. We don't tend to
argue so much about the things all humans share which other animals lack.

There are no sides. If the FAI can be seen to be on a side at all, ve's on
everyone's side, or humanity's side. This really isn't a political battle.

> 3. Safety. It is better to diversify the risk by building several AIs
> so in case one goes awry the others (perhaps many others) will be
> able to suppress it -- fault-tolerance of distributive systems. It
> seems the best way is to let a whole lot of people augment their
> intelligence via uploading or *cyborganization*.

Not necessarily. First, such an effort can only reduce independent risks -
there are failure modes in which all the AIs fail together. Given safety as the
aim, what's the best distribution of effort between separate AI projects? It
seems we shouldn't spread our effort thin and create a whole bunch of low-quality
AIs, but concentrate it on creating one high-quality AI.

The idea of policing a community against offending individuals is only
feasible among groups of equals. It seems unlikely that different AIs will be
equal unless the ones ahead hold back to wait for the others to catch up. An
unFriendly AI is unlikely to slow down in such a situation, giving it a
head start.

Fault-tolerance can be built into a single mind itself. The distinction
between one mind and a group is especially clear in humans, who have a fixed
non-agglomerative amount of brainware, but less so in AIs. If needed, an AI
can take multiple points of view, with the final outcome depending on some
form of internal consensus.

> 4. The superAI is unfathomable (hence unpredictable) to us, so
> what's the difference between this and other techno-catastrophes?

The difference between Friendly AI and most other scenarios is that other
scenarios are predictably bad. Friendly AI is unpredictable, but only
unpredictable in the sense that we don't actually know what right will turn
out to be. We've already seen development of human morality in recent times
(equal rights independent of race and sex, seeing slavery as evil); we can't
know where we'd end up if we were actually getting more intelligent (our
brains have remained constant over recorded history) at the same time.

The most likely predictably bad scenarios I can think of:
* unFriendly AI - all dead, with the galaxy and beyond possibly laid waste. Since
this involves greater-than-human intelligence it is unpredictable in its
details, but the important moral consequences are predictable.
* nanotech war - all, or most, dead.

Of course, I'm skipping over a whole lot of details here, as in the rest of this
post.

> 5. Even if we have FAI, it probably will not stop some people from
> uploading themselves destructively (They have their rights). This will
> still create inequality between uploaders and those remaining
> flesh-and-blood.

But what does this 'inequality' matter? Just as they have the right to upload
themselves destructively, you have the right not to, and not to worry about being
harmed. As far as I can see, there is no race.

> Therefore the superAI scenario will not happen UNLESS there are
> some compelling reasons to build it. The fear is that destructive
> uploading will create too much of a first-move advantage to the
> effect that everyone would be compelled to follow suit immediately.

There are other dangers which have to be taken into account before
decisions are made. A significant one is existential risk - how can we
minimise it?

> So the goal should be clear: Create a technology for humans that
> would allow them to be on-par with uploads. And I think that
> answer would be: "personal AI". The PAI starts off like a baby
> and shares the users experience, like a dual existence. By the
> time cyborganization is available, the cyborganization process
> would be like merging with one's personal AI.

This is something to do once existential risks have been dealt with. What's to
stop someone creating an unFriendly AI from their personal AI? Or from
starting a nanotechnological war? What stops present-day suffering and death?
It's not a bad idea; it's just that I don't think it's a good first step.

> Thus, the rights to transhuman intelligence is distributed to all
> those who can afford it. If you think about it, that is probably
> the only sensible way to deal with computational power
> explosion... ie to create a broadly distributed balance-of-power.
>
> It doesn't matter that many people may not be techno-savvy
> enough to use the AI -- that depends on user-friendliness and
> the best AI should be quite transparent and easy to use.
>
> Well, this still sounds very vague and difficult, but it's more
> plausible than the superAI scenario already (I think).

How's that? It'd be helpful to keep plausibility separate from desirability.

> One last problem that remains is poverty. I predict that
> some people will be marginalized from cyborganization, rather
> inevitable. Who am I to save humanity? We have to accept
> this and the next best thing is to maximize availability
> through education and perhaps redistribution of wealth,
> creation of more jobs etc.

As you might imagine, this is only the tip of the iceberg. For further details
I'd recommend reading the http://intelligence.org/ materials:
* http://intelligence.org/CFAI/ - Creating Friendly AI (in particular Part 2,
"Beyond Anthropomorphism")
* the introduction documents on the Singularity there
* anything else that looks interesting

In particular, these will clear up misunderstandings about what SIAI means by
creating a Friendly AI, and why we think it's an all-round good idea.

- Nick


