From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jun 27 2002 - 02:50:28 MDT
At 01:14 AM 6/27/2002 -0500, Stephen Reed wrote:
>On Wed, 26 Jun 2002, Ben Goertzel wrote:
>
> > I don't doubt that there are many trustworthy, moral individuals within the
> > US military. However, I do not trust the US military as an organization,
> > no way José. This is the crew that invaded Grenada... the organization
> > that nuked Japan (probably both bombs were needless, but it's pretty damn
> > clear the second one was), that systematically tortured Vietnamese
> > women and
> > children in the name of democracy and justice.... I'll spare you a full
> > list of examples. Ever read the first-person account of how US soldiers
> > chopped off the arms and legs of a Vietnamese woman, inserted dynamite in
> > her vagina and blew her up? Not an outlier occurrence. Excuse me if the
> > impeccable morality of the US military seems a little questionable to me...
>
>You strengthen my point. The US military has evolved ethics because of the
>terrible power that they are entrusted to use. Why else have the strict
>honor system at military schools?
Um, because a strict honor system is part of the conditioning used to get
military personnel to always follow orders? Think that may have something
to do with it? And, well, of course the US Military would prefer not to
see the huge ethical problems. But following orders comes first.
>Furthermore, knowledge that individual
>soldiers are capable of war crimes only encourages future military
>planners to substitute an AGI for a soldier (or augment/monitor a
>soldier) in that situation. According to my understanding of CFAI, the
>AGI should be resistant to criminal behavior due to a deep knowledge of
>ethics. From which follows my conclusion that an AGI would not follow an
>illegal military order.
Yes, a deep knowledge of ethics intended to prevent the AI from killing
people. In such a scenario you're likely to get a HAL. Due to
irreconcilable differences in its morality/ethics it could end up doing
virtually anything. We're seriously worried about whether Friendliness can
be applied at all, much less a lobotomized version that says it's OK to
kill people in groups A, B, C or D but not people in groups E or F.
And some people really wonder why I'd like to see personal space travel
before virtually anything else. Because I want to get the hell out of
here, ASAP.
>More ethics info regarding my recent notion that military ethics may serve
>to educate an AGI can be found at a military web site and its links list:
>
>http://www.usna.edu/Ethics/
>
> > The Singularity is directly opposed to the national interests of any
> > particular national government, because it will almost inevitably lead to a
> > "revolution" that will make national boundaries meaningless.
>
>Well for me the Singularity is something that I cannot see beyond, so I
>will respond to your statement as though I agree with the premise and
>offer this counterexample to the second statement in your argument:
>
>Recent history shows that national governments will surrender sovereignty
>for a greater good. Witness the EU and NAFTA.
I didn't see anyone surrender any SOVEREIGNTY. They traded a little power
from column A (control) for power from column B (financial).
> > Thus, no government should be trusted to play a leading role in the
> > Singularity.
>
>Not only do I have the opposite opinion, I believe that as the evidence
>mounts that an AGI is possible, the government - and in particular the US
>government - will take the leading role. I believe this from my
>understanding of the dynamics of how our government institutions have
>responded to the great technical challenges over my lifetime - The Space
>Race, the War On Cancer, Safe Cars, Solar Energy, The AIDS Epidemic ...
>I do not see any non-governmental organization as the front-runner. The
>actual work of developing and educating the Seed AI would in my opinion be
>performed by private contracting companies - somewhat in competition with
>each other.
Lovely, let's have the AI educated by the lowest bidder. Not to mention
that having multiple entities "educating" it would likely end up producing
conflicting goals and external references. Can't you see how this whole
process could easily break the AI and, as a result, quite possibly destroy
the human race?
> > I might accept gov't funding for AGI research, but I would never willingly
> > place control of an AGI in the hands of any military organization. That
> > really scares me. Those people are not Singularity-savvy, and I don't
> > trust
> > they will become so in the future, not in any healthy way. Plus, their
> > interests are not those of the human race as a whole, let alone of
> > sentience
> > as a whole.
>
>My belief that the funding will be from the defense budget stems from my
>conclusion that an AGI would be priceless for our nation's defense, and,
>knowing this, the military will fund it before other government
>institutions do.
>
>My comfort with a military research organization (e.g., DARPA) leading and
>coordinating this effort is from my own experience, and I am not going to
>persuade anyone else (lacking the skills to do so).
This has nothing to do with comfort. OK, I'm not saying DARPA is a bad
organization. Quite the contrary, they've done great things. But ANY
government, and yes that includes the US Government, is the wrong entity
to create or control the Singularity. Attempting to put any constraints on
Friendly AI development much beyond being friendly to all humans is
virtually impossible. Yet governments would have to try to do just that;
they would attempt to force their will and viewpoint on the AI. Simply the
act of doing this could unravel the whole thing and, thus, we lose all
control of the AI.
Go back and read some of the much earlier posts. There is NO WAY to
control or stop an AI which is significantly more intelligent than
humans. Which means if you screw it up, we're all toast. Not even DARPA
can get around this. The project either needs to be done as COMPLETELY
Friendly or not at all. And a completely Friendly AI would be useless to
governments, so they certainly wouldn't try that.
James Higgins