From: Daniel Burfoot (firstname.lastname@example.org)
Date: Tue Nov 20 2007 - 23:26:36 MST
On Nov 21, 2007 2:20 PM, Thomas McCabe <email@example.com> wrote:
> > The question I'm seriously asking myself now is: should AI research be
> > put on hold until more political safeguards can be put in place?
> No. For that to have a reasonable chance of success, you would have to
> get competent transhumanists (if not professional AI researchers)
> writing the regulations, and the bureaucrats aren't going to let that
> happen. Otherwise, you just end up having to fill out meaningless AI
> Safety Permit Application Form #581,215,102.
Let me rephrase. As a reasonably moral person, or at least a person
who doesn't want to play into the hands of tyrants, should I give up
my AI research?
Or are we in an arms race against unspecified enemies, where the only
way to be sure that they won't get the superweapon first is to build
it ourselves, as fast as possible?
Note that "ourselves" is a deeply problematic notion here. I trust the
US government about as far as I can throw it.
It seems to me that FAI theory, to be successful, must also describe
ways to prevent dictators and other random idiots from constructing
non-Friendly AGI once the theory of AGI becomes widely known.