Re: Hiding AI research from Bad People (was Re: OpenCog Concerns)

From: William Pearson (wil.pearson@gmail.com)
Date: Tue Mar 25 2008 - 09:38:34 MDT


On 25/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> On Mar 25, 2008, at 1:12 AM, William Pearson wrote:
> > On 25/03/2008, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
> >> Non sequitur. And in any case, I do not assume a FOOM scenario;
> >> indeed, I have a history of arguing quite the opposite. It does not
> >> invalidate my original point, which was that nobody will be paying
> >> attention until such a time that tripping over it is unavoidable
> >> unless you put a lot of effort toward attracting attention. This
> >> argument is not dependent on the speed of the scenario, though
> >> outcome distributions do vary a bit with scenario speed.
> >>
> > I'm missing something, then: what do you think will make the
> > government "a day late" if someone apart from them creates AI? A day
> > behind, perhaps, but if you aren't expecting a discontinuity, then
> > what exactly makes them late?
>
> To put it simply, it is a Red Queen's Race, and for a variety of
> technical reasons no one is likely to catch you after a certain
> point in development is crossed, one that should occur pretty early
> on and/or under the noise floor of notice.

But they don't have to catch you in a fair technological development
race; they can use plenty of other means to slow your efforts (legal
action, grey ops, or black ops).

Or do you consider these unlikely as well?

With regard to technological races, some systems might depend heavily
on long-term training to test and perfect; these might also benefit
from the excess training capacity (as well as compute) that a
government or other large body can supply.

  Will Pearson


