From: James Higgins (email@example.com)
Date: Sat Jun 30 2001 - 13:20:48 MDT
This is by far the *most* dangerous suggestion I think I've read on this
list. See below.
At 02:13 PM 6/30/2001 -0400, Jack Richardson wrote:
>Thanks for your response to my message.
>The idea I'm presenting for consideration is the importance of human
>participation in the ascent towards the Singularity. I'm suggesting that
>there may be an alternative to building the AI while we remain just as we are.
>One issue here is whether the ascent can be sufficiently controlled so that
>we can follow as best we can what is going on. One way to do this is to have
>the AI itself have as a part of its architecture not just the communication
>of what changes it is making to its code, but providing a
>human-understandable explanation as to what it is doing and what it expects
>the next steps will be. One control might be that it would have to get a
>positive response from its programmers that they have sufficient
>understanding before it could proceed to the next iteration of its
>self-improvement.
What is to prevent the AI from misleading its inspectors in this
regard? If you've done much programming you know how dangerous inaccurate
source code comments are. The comment says "The following block does X
because of Y", which is exactly what the programmer reviewing or modifying
the code expects to see. So they spend countless hours trying to figure out
why it doesn't work. Eventually they discover that the actual code doesn't
match the comment, and whack themselves in the head for missing it.
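A toy sketch of the kind of comment/code mismatch described above (the function name and scenario are invented for illustration): the docstring and inline comment both claim the block caps values at a maximum, while the code actually enforces a minimum. A reviewer who trusts the comment will look everywhere except the one line that is wrong.

```python
def clamp_scores(scores, limit=100):
    """Clamp each score so that none exceeds `limit`."""
    clamped = []
    for s in scores:
        # Cap the score at `limit` (what the comment claims)...
        clamped.append(max(s, limit))  # ...but max() enforces a floor, not a ceiling
    return clamped

# A trusting reader expects [50, 100]; the code actually returns [100, 150].
print(clamp_scores([50, 150]))
```

The point is that the English-language description and the behavior can diverge silently; nothing forces them to agree, which is exactly the danger with an AI narrating its own code.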
Granting the AI the ability to explain its code in English makes it vastly
more likely that it could fool the programmers.
>Another area for consideration is whether the AI in its advanced but still
>pre-singularity mode could make recommendations as to what enhancements to
>human intellectual capacity would work best to allow us to follow it further
>on its path. The advanced pre-singularity phase is a key point where a lot
>of useful things might be done, and I'm suggesting that the architecture of
>the AI be designed in a way that we could all get the benefit of this phase.
Having an AI suggest modifications for the minds of humans is
dangerous. Even more so when the purpose is to better understand the
AI. Who needs SI magic when the AI is asked to modify the minds of its
programmers? Without the ability to prove that the AI is in fact friendly,
this is most certainly a recipe for disaster.
>It is certainly true that after the Singularity occurs all bets are off.
>This is why controlling the advanced phase is crucial. If we don't have near
>total confidence in the friendliness of the AI, we should never let it get
>beyond this point. Being able to extensively interact with it during this
>phase, even with the risk that someone else might create one first, is
>necessary to establish that confidence.
How could we have any confidence in anything after letting the AI explain
itself to us, much less modify our minds? Familiar with the Keepers from
Babylon 5? I can just imagine screaming internally "no, this is wrong,
it is not friendly", as a modified or extended part of my mind gives the
go-ahead.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT