From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jun 23 2002 - 21:53:25 MDT
At 09:56 PM 6/23/2002 -0700, you wrote:
>James Higgins wrote:
> >
> > I disagree. If the goal is to protect yourself from humans then becoming
> > exceptionally powerful, on their terms, is a good answer. Especially if
> > doing so is a relatively easy task.
> >
>
>If the goal of the SI is 'to protect yourself from humans', then we will
>have already lost. If the SI is built with, or acquires, an adversarial
>attitude toward humans, then we will be toast. My suggested alternatives
>(usefulness, friendliness, entertainment-source) were all intended to be
>used by a non-adversarial SI, and to have a good chance of gaining
>significant cooperation from humans.
>
>While I agree that it is very likely any SI would quickly become the most
>powerful entity on this planet, the source of that power will probably not
>come from any existing financial or political base. Why? Because the
>existing infrastructure is so incredibly cumbersome and inefficient and
>slow, there would be no point in 'taking over'. The problems facing a newly
>minted SI are unlikely to be bottlenecked on resources, because higher
>intelligence can do an awful lot with very little (relatively speaking of
>course).
>
>Michael Roy Ames
Even if the AI were friendly, it may very well need to protect itself. Most
people would not understand it, and humans do strange and sometimes terrible
things to what they do not understand. Thus an FAI in its early stages may
deem such protective steps desirable at some point. Notice that I said in my
previous post that the AI could do substantial good for humanity by taking
financial control. This may also help it fulfill some of its Friendliness
goals while protecting itself.
James Higgins