Re: AI timeframes

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Fri Apr 09 2004 - 00:21:18 MDT


On Apr 8, 2004, at 7:46 PM, Elias Sinderson wrote:
> Have you fully considered what the US government would most likely do
> (read: military applications) with the successful result(s) of such a
> project? Note that I wouldn't feel any more secure if the project
> were to be entirely privately funded. Seriously considering the
> outcome of either of the above situations should only serve to
> underscore the importance of 'friendliness' being part of the overall
> equation.

The development of Strong AI brings out the control freak in most
people; most are not comfortable with almost anyone else being in
control of it. The problem is that "controlling" a fundamentally cheap
technology like AI means that the person who wins the race is likely
to be the one who managed to avoid the controls.

IMO, one shouldn't spend too much time dwelling on who develops it
first. Not only will you have little control over it, but it probably
won't have *that* big of an impact on the final outcome.

> IMHO, the best way to guarantee this being the case is by assuring
> transparency of the research efforts coupled with independent
> oversight (perhaps by an international body, although I find it
> completely unlikely that this would be accepted by the US government).

Nope, definitely not. Nothing has ever progressed well or quickly
under bureaucratic oversight of any type.

All this transparency and oversight would accomplish is letting some
other private or governmental organization run with the technology
ball while the original research organization burns time and resources
dealing with bureaucracy.

To a certain extent, whoever gets there first wins. Period. Anything
that hinders the process and lets others tinker with the internals
creates an opportunity for other private organizations, ones with no
such hindrances, to quietly emerge. Worse, there is a LOT of
motivation for people involved in both the oversight AND the primary
research teams to defect in this manner, and the likely motivations of
the defectors would make them precisely the people you don't want
doing this. Bad, bad, bad.

The best bet is probably relatively secretive private research. It
minimizes the pool of potential defectors and the risk of defection,
to the extent such risks can be minimized at all. Outside individuals
won't have much input into the process, but then I see very little
good that could come of such input unless one takes an improbably
optimistic perspective.

j. andrew rogers


