From: Ben Goertzel (ben@goertzel.org)
Date: Fri Feb 24 2006 - 04:04:23 MST
> SIAI is not proposing that the US government or the UN should decide how
> to design a Friendly AI. SIAI is not proposing that "we, as a society"
> should be thinking about how to build a Friendly AI. SIAI is trying to
> build a Friendly AI. Believe it or not, individual human beings are
> capable of thinking intelligently about ethics.
>
> > - ... we must conclude that the SAFEST thing to do is to rush into AI
> > and the Singularity blindly, without pause, before the Powers That Be
> > can control and divert it.
>
> I don't see how committing mass suicide is the safest thing to do.
Peter, two points:
1)
Eliezer has sometimes proposed that a Singularity launched without
proper planning for Friendly AI is almost certain to lead to human
extinction. But this has not been convincingly argued; he has only
shown that it is a significant possibility.
2)
Phil is not really suggesting that rushing into the Singularity
blindly is the best possible option. He's merely suggesting that the
*better-in-principle* options are not very plausible, so we should
focus on rushing ahead, because it is by far the highest-probability
option among the plausible ones.
As I understand it, a caricature of Phil's argument would go something like:
* If we launch a Singularity before the jerks in power figure out
what's up, we have roughly a 50/50 chance of a good outcome (by the
Principle of Indifference, since what happens after the Singularity is
totally opaque to us lesser beings; see the toy sketch after this list)
* If we don't launch a Singularity before the jerks in power figure
out what's up, we have a much lower chance of a good outcome, because
those jerks are likely to find some way to screw things up
* The truly better-in-principle approach to the Singularity would
require a long period of peaceful study and experimentation before
launching it, but this is just not feasible, because once the tech
gets to a certain point, the jerks in power will pay people to
develop it quickly and in an unsafe way
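To make the probability comparison in that caricature concrete, here
is a minimal toy sketch in Python. The numbers are placeholders I am
supplying purely for illustration: the 0.5 is the
Principle-of-Indifference coin flip for launching first, and the 0.1
just stands in for "a much lower chance"; neither figure is part of
Phil's actual argument.

    # Toy comparison for the caricature above.  Both probabilities
    # are invented for illustration only.
    P_GOOD_IF_WE_LAUNCH_FIRST = 0.5   # assumed: pure ignorance prior
    P_GOOD_IF_JERKS_CONTROL = 0.1     # assumed: "much lower", value invented

    def launch_first_looks_better():
        # True when launching first gives the higher chance of a good outcome
        return P_GOOD_IF_WE_LAUNCH_FIRST > P_GOOD_IF_JERKS_CONTROL

    print(launch_first_looks_better())  # prints True under these made-up numbers

Obviously the conclusion is baked into whatever numbers you plug in;
the sketch only shows the shape of the comparison, not its truth.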
Specialized to AGI, the argument would go something like:
-- making provably safe AGI is really hard and will take time X
-- for a dedicated maverick team, making AGI of unknown safety may be
easier, and will take time Y
-- after enough time has passed, some jerks will make unsafe and nasty
AI; this will take time Z
If
Y < Z < X
then it may be optimal to make AGI with unknown safety.
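Again, just to pin down the shape of the argument, here is a toy check
of that timing condition; X, Y and Z below are hypothetical year
counts I am inventing for illustration, not anyone's actual estimates:

    # Toy check of the Y < Z < X condition.  All three durations are
    # invented for illustration only.
    X = 30   # assumed: years to build provably safe AGI
    Y = 10   # assumed: years for a maverick team to build AGI of unknown safety
    Z = 20   # assumed: years until some jerks build unsafe, nasty AI

    def unknown_safety_route_is_only_one_in_time(x, y, z):
        # True when the maverick team can finish before the unsafe actors
        # (y < z) but the provably-safe project cannot (z < x)
        return y < z < x

    print(unknown_safety_route_is_only_one_in_time(X, Y, Z))  # True here

Whether those inequalities actually hold is, of course, the whole
empirical question.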
I am not putting this forth as my own argument; I am merely trying to
clarify the argument that was made, as I understand it, since it seems
to have been misunderstood.
I do not think that you or anyone in the SIAI has ever presented a
convincing refutation of this argument. It is certainly not a
watertight argument, but IMO it is at least as plausible as the
SIAI perspective (which, as I understand it, holds that an AI not
strongly engineered for Friendliness will almost certainly be very
dangerous).
-- Ben