From: Jef Allbright (jef@jefallbright.net)
Date: Tue Feb 22 2005 - 11:08:04 MST
Ben Goertzel wrote:
>Hi all,
>
>Spurred on by recent discussions & some conversations w/ my wife, I've had
>some new thoughts on our familiar topics (Friendly AI, etc.).
>
>See this document,
>
>"Encouraging a Positive Transcension via Incrementally Self-Improving,
>Theorem-Proving-Based AI"
>
>http://www.realai.net/ITSSIM.pdf
>
Ben -
Some comments on your ITSSIM:
It seems to bear some resemblance to our discussion during Jan-Feb 2004.
For example:
http://sl4.org/archive/0401/7798.html
http://sl4.org/archive/0401/7800.html
In particular, your examples of the man who is uploaded, or the dog that 
has its intelligence enhanced, correspond to my emphasis on expanded 
scope of awareness as a key element of an effective ethical theory for 
our future. I use the common example of a child being angry with a 
parent while the parent, having a greater scope of awareness, is making 
better choices for the good of the child. I further abstract this 
concept of scope of awareness to include /groups/ of individuals 
sharing a set of common goals and sharing their perceptions and 
knowledge of their world. In moral reasoning it is essential that the 
definition of Self not be limited to a physical body, but be that with 
which one subjectively identifies.
The idea of incremental theorem-proving appears congruent with my 
earlier statement:
    "It seems to me that the ultimate measure of an ethical system is
    consistency. Within nested context levels, the consistency of ethical
    principles should increase as the context becomes broader."
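To make that intuition concrete, here is a toy sketch of my own 
(nothing from Ben's paper; every name in it is an illustrative 
assumption): treat a context as a set of situations, treat principles 
as predicates over situations, and measure consistency as the fraction 
of cases on which the principles agree. Principles that conflict in a 
narrow context can still score as highly consistent once the context 
is broadened.

    from itertools import combinations

    def consistency(principles, situations):
        """Fraction of (situation, principle-pair) cases that agree."""
        agree = total = 0
        for s in situations:
            for p, q in combinations(principles, 2):
                total += 1
                agree += (p(s) == q(s))
        return agree / total if total else 1.0

    # Two crude principles that conflict only on a narrow edge case.
    do_no_harm = lambda s: s["harm"] == 0
    net_benefit = lambda s: s["benefit"] > s["harm"]

    narrow = [{"harm": 1, "benefit": 5}]          # pure conflict: 0.0
    broad = narrow + [{"harm": 0, "benefit": 3},
                      {"harm": 0, "benefit": 1}]  # mostly agreement: 0.67

    print(consistency([do_no_harm, net_benefit], narrow))
    print(consistency([do_no_harm, net_benefit], broad))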
Three key observations from your paper:
   1. You mention two costs in the ITSSIM approach. I agree there is
      a cost (an economic factor, which is a good sign of a theory
      grounded in reality), but I think your two costs are actually
      two views of the same cost: that of evaluating the moral
      consistency of the direction of progress.
   2. An inherent subjective relationship is implied when your
      smaller, less complex system evaluates the progress of the
      larger, more complex system, and you point out the conundrum of
      computing resources this would require. I will suggest a way of
      un-asking the question of how this should be handled.
   3. You say "In any case, the creators of the initial AI would have
      no guarantee of what the long-term result would be – all we’d
      know is that, each step of the way, the change would seem like
      a good idea based on extremely rigorous reasoning." I strongly
      agree with this, but would redefine "based on extremely
      rigorous reasoning" as "based on principles of effective
      interaction", with the implication that the process continues
      to evolve within a larger environment rather than having a
      fixed mathematical basis. (A toy sketch of this step-wise
      discipline follows this list.)
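Here is the promised sketch, a minimal caricature of that step-wise 
discipline and not ITSSIM itself: propose() and 
passes_rigorous_check() are placeholder names I am assuming in place 
of the paper's actual proposal and theorem-proving machinery.

    def incremental_self_improvement(system, propose,
                                     passes_rigorous_check, steps=100):
        """Adopt each proposed self-modification only if the per-step
        check passes. Nothing here guarantees the long-term endpoint;
        the only assurance is local: every accepted change looked
        good under rigorous scrutiny at the time it was taken."""
        for _ in range(steps):
            candidate = propose(system)
            if passes_rigorous_check(system, candidate):
                system = candidate  # take the step
            # otherwise keep the current system, await another proposal
        return system

    # Trivial stand-ins: the "system" is a number, proposals nudge it
    # upward, and the check accepts only non-regressions.
    final = incremental_self_improvement(
        system=1.0,
        propose=lambda s: s * 1.1,
        passes_rigorous_check=lambda old, new: new >= old,
    )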
Now, here is where my thinking appears to diverge from yours and from 
most on this list:
We, as subsystems of human society and beyond, are part of a 
self-improving system. We incorporate our values, an approximation of 
the values we would wish to hold given greater scope of awareness, in 
our interactions with Other, and in the process we grow in alignment 
with our values. Subjectively, we see as good "that which works", and 
this drives the direction of improvement. Objectively, interaction 
between Self and Other is more effective with increased scope of 
awareness, and with application of principles of synergetic cooperation. 
And thus, recursively, the larger system grows and we influence our 
growth within it.
Rather than looking inward, building a singleton AI in an attempt to 
model the chaotic dynamical system of which we are a part and to 
influence its direction from a central position (as if just the right 
force and vector could be calculated), we should be looking outward, 
building intelligent tools that improve our perception and knowledge 
of our world from a broad base that already embodies human values, is 
constrained by the economics of what works, and is therefore on the 
path to good.
Some say "We don't have time to help humanity save itself; we have to 
save the world via our focused brilliance." The only effective response 
I know of is to highlight the importance of context, and nested scopes 
of awareness.
- Jef
http://www.jefallbright.net