From: Aubrey de Grey (ag24@gen.cam.ac.uk)
Date: Wed May 19 2004 - 12:36:00 MDT
Chris Healey wrote:
> FAI seems more a discipline of risk mediation than anything else.
...
> The problem is still there, of course. ... Don't knowingly cede
> predictability!
Hm, I was worried that this might be basically all there is to it. It
seems to me that the option of preventing any full-blown AI from ever
emerging (as opposed to simpler semi-autonomous entities restricted to
manufacturing/mining/etc. applications, against which I see no argument
but which should need no self-modifying emergent-complexity component)
is not as futile as all that. As with WMDs, the most pragmatic tactic
seems to be to minimise people's desire to build these machines in the
first place, both by educating people about the dangers inherent in
them and by making the benefits of developing them less desirable. I think of my
own work on life extension as a substantive contributor to the latter,
in the sense that with indefinite lifespans we won't be in such a rush
all the time, so it'll be OK if our machines take a while to do things.
Aubrey de Grey
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:36 MST