RE: Threats to the Singularity.

From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jun 23 2002 - 13:43:46 MDT


At 12:18 PM 6/23/2002 -0600, you wrote:
>Hi James,
>
> > Well, we would hope that the team who created this AI doesn't give it
> > access to the global network! Or, if it does, it would be so highly
> > restrictive as to (at least for the near term, pre-super intelligence)
> > prevent such uncontrolled expansion. However, they could conceivably be
> > ignorant enough to grant such access or have a flaw in their security.
>
>This is a tricky issue. The global network is an important source of info
>for any AI in learning phase. One wants to give one's young AI net access
>via a kind of "reverse firewall" that allows it to gather data but not to
>cause damage.

Right, that was my point. It would have a highly controlled connection to the
Internet. In today's terms it would have HTTP access to GET most
things. POST access would be removed or controlled. No other protocols
would be allowed. And, preferably, the content of the HTTP traffic would
be parsed to provide further filtering. While it would conceivably be
possible for an AI to extend itself out to the Internet via this
connection, it would be very difficult. It would probably take significant
effort and research to do so, which could (hopefully) be spotted from the
nature of the information being retrieved. However, once the AI became
significantly more intelligent than humans, the chance of escape would
obviously go up.
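To make the shape of such a filter concrete, here is a rough sketch of a
GET-only egress proxy in modern Python. The port, byte cap, and log file
name are illustrative assumptions, not settled choices; a real team would
also want far deeper content inspection than the placeholder comment shows.

import logging
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_BODY = 1_000_000  # cap on bytes returned per request (arbitrary)

# Every request is logged so unusual retrieval patterns can be audited.
logging.basicConfig(filename="egress.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

class GetOnlyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In proxy mode, self.path holds the absolute URL being fetched.
        logging.info("GET %s", self.path)
        try:
            with urllib.request.urlopen(self.path, timeout=10) as resp:
                body = resp.read(MAX_BODY)
        except Exception as exc:
            self.send_error(502, "fetch failed: %s" % exc)
            return
        # Content parsing/filtering would go here, before anything
        # is handed back to the sandboxed client.
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # POST (and every other method, via the aliases below) is refused
        # outright: the client can pull data in but cannot push data out.
        logging.warning("blocked %s %s", self.command, self.path)
        self.send_error(403, "outbound writes are not permitted")

    do_PUT = do_DELETE = do_CONNECT = do_HEAD = do_POST

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), GetOnlyProxy).serve_forever()

Note that even GET-only access leaves covert channels - the URLs themselves
(query strings) carry outbound bits - which is exactly why escape remains
conceivable, and why parsing the traffic itself matters.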

> > >co-evolutionary competition and population pressure the AIs will very
> > >soon start designing and building new hardware, which allows them to
> > >become
> >
> > And just how would they make the leap from running on silicon to building
> > silicon? I'm almost certain that there is no capacity to do this
> > today. They would have to be able to perform 100% of the manufacturing
> > and assembly operation completely via computer, assemble the working
> > technology and connect it to the net. All without human intervention.
> > Even if there was a facility which had all of this capability computer
> > controlled (which I don't believe is the case, much of it is manual -
> > moving pieces between manufacturing workstations, etc.) the operators
> > would have to sit there while the machines spent hours (days?) "doing
> > their own thing".
>
>Your point is an excellent one. However, there are plenty of scenarios one
>can conjure to counter it.
>
>For instance, suppose the AI finds a way to threaten a lot of people with
>death, and then basically *blackmails* humans into creating a fully
>automated computer-and-robot-manufacturing facility for it....

True, but it would take considerable time to construct such a facility. I
know it takes one major chip manufacturer about 2 years to construct their
chip fabs (plus 10+ billion dollars). Blackmailing to this degree would be
very, very difficult. Oh, and the chip fab is not an end-to-end facility,
obviously. Even assuming the needed facility was an order of magnitude
smaller/cheaper than the mentioned chip fab, that still comes to around a
billion dollars and many months of construction - very difficult terms to
blackmail for.

>Or, more probably, suppose it finds some group of humans and promises them
>lots of goodies if they build it the right automated manufacturing
>facilities.... it's almost inconceivable that an AGI, capable of predicting
>financial markets and hence getting lots of $$, couldn't find *some* group
>of humans to build it whatever it wanted for cash payment...

True, and while more plausible than blackmail, it would still take
significant time (measured in months).

>I can see a future gov't wasting a lot of effort protecting against
>military-style attacks by AGI, and then finding that an AGI actually takes
>over the world via financial & political machinations...

Right, I don't think governments will play much (if any) role in the coming
Singularity.

James Higgins


