From: Aubrey de Grey (firstname.lastname@example.org)
Date: Mon May 31 2004 - 10:08:28 MDT

Eli, many thanks for writing this extremely clear and thorough piece.
I will never think of the islets of Langerhans in quite the same way.

From your mention at many points in the essay of controlled shutdown,
it seems to me that you are gravitating rather rapidly to the position
that I instinctively have on FAI, which is that a true FAI will do very
little indeed in the way of altering our environment but will concern
itself strictly with pre-empting events that cause huge loss of life.
Any "interference" in more minor matters will be seen (by it, if not
in advance by its designers) as having drawbacks in terms of our wish
for collective self-determination that outweigh its benefits. [It may
go even further than that, of course -- e.g., it may decide that the
very presence of a super-human intelligence, i.e. it, detracts from
humanity's self-image so much that it shuts itself down and leaves us
to our own devices. But I digress.]

If we assume the above, the question that would seem to be epistatic to
all others is whether the risks to life inherent in attempting to build
a FAI (because one might build an unfriendly one) outweigh the benefits
that success would give in reducing other risks to life. So, what are
those benefits? -- how would the FAI actually pre-empt loss of life?

Browsing Nick Bostrom's essay on existential risks, and in particular
the "bangs" category, has confirmed my existing impression that bangs
involving human action are far more likely than ones only involving
human inaction (such as asteroid impacts). Hence, the FAI's job is to
stop humans from doing risky things. Here's where I get stuck: how
does the FAI have the physical (as opposed to the cognitive) ability
to do this? Surely only by advising other humans on what actions THEY
should take to stop the risky actions: any other method would involve
stopping us doing things without our agreeing on their riskiness, which
violates the self-determination criterion. But surely that is a big
gaping hole in the whole idea, because the humans who obtain the FAI's
advice can take it or leave it, just as Kennedy could take or leave the
advice he received during the Cuban missile crisis. The whole edifice
relies, surely, on people voting for people who respect the advice of
the FAI more than that of human advisors. That may well happen, but
which candidates do so might not be very well publicised, to say the least.

What is wrong with this scenario?

Aubrey de Grey