From: Mark Walker (mdwalker@quickclic.net)
Date: Fri Jun 28 2002 - 13:25:43 MDT
Pascal's wager, you'll recall, goes roughly like this.
1. If there is a God and you are a believer, then up you go to heaven.
(Best outcome).
2. If there is no God and you are a believer, then little is lost.
3. If there is no God and you are not a believer, then little is gained.
4. If there is a God and you don't believe, then down you go to hell to
eat flaming shit. (Worst outcome).
Ben Goertzel wrote:
> Over the last 15 years, I have chosen to focus my research work, and my
> writing, on the creation of real AI, rather than on the Friendliness
> aspect specifically. This is not because I consider Friendliness
> unimportant. It is, rather, because -- unlike Eliezer -- I think that we
> don't yet know enough about AGI to make a really detailed, meaningful
> analysis of the Friendly AI issue. I think it's good to think about it
> now, but it's premature to focus on it now. I think we will be able to
> develop a real theory of Friendly AI only after some experience playing
> around with infrahuman AGI's that have a lot more general intelligence
> than any program now existing.
>
E.Y. thinks Friendliness first; B.G. thinks AGI first. Who is right?
Suppose we don't know. How should we act? Well, either attempting to design
for Friendliness before AGI will be effective in raising the probability of
a good singularity, or it will not. From best to worst, the outcomes are as
follows:
1. We believe (and act as if) Friendliness should come first and it is true
that Friendliness should come first. (Best outcome).
2. We believe (and act as if) Friendliness should come first and it is false
that Friendliness should come first. (Slightly negative outcome).
3. We believe (and act as if) Friendliness should not come first and it is
false that Friendliness should come first. (Slightly positive outcome).
4. We believe (and act as if) Friendliness should not come first and it is
true that Friendliness should come first. (Worst outcome).
1 is best because we have effectively raised the probability of a good
singularity. Of course, we should not make light of 2 and 3. If it takes X
years to figure out Friendliness and it is inefficient to focus on it now
(as B.G. maintains), then a lot of time (= a lot of lives) could be wasted.
With 4, at best we squander our opportunity to raise the probability of a
good singularity; at worst we are responsible for a bad singularity (an
unforeseen hard takeoff). Thus, given our uncertainty and what is at stake,
I think we should act as if Friendliness should come first.
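To make the wager concrete, here is a small Python sketch of the decision
matrix. The payoff numbers are purely illustrative assumptions, chosen only
to respect the ordering above (1 best, 3 slightly positive, 2 slightly
negative, 4 far worse); p stands for the probability that Friendliness
really should come first.

# Hypothetical payoffs, chosen only to respect the ordering above:
# outcome 1 best, 3 slightly positive, 2 slightly negative, 4 far worse.
payoff = {
    ("friendliness_first", True):   1.0,   # outcome 1
    ("friendliness_first", False): -0.1,   # outcome 2
    ("agi_first",          False):  0.1,   # outcome 3
    ("agi_first",          True): -10.0,   # outcome 4
}

def expected_value(strategy, p):
    # p = probability that Friendliness really should come first
    return p * payoff[(strategy, True)] + (1 - p) * payoff[(strategy, False)]

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print("p=%.1f  friendliness_first=%+.2f  agi_first=%+.2f"
          % (p, expected_value("friendliness_first", p),
             expected_value("agi_first", p)))

With these made-up payoffs the asymmetry of outcome 4 does the work: the
Friendliness-first strategy already has the higher expected value for any p
above roughly 2%, which is exactly the structure of the wager.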
Mark