RE: Intelligence is exploitative (RE: Zen singularity)

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Wed Feb 25 2004 - 18:14:10 MST


Hi Ben :)
 
Once again, I agree that at the current stage of both theory and experimentation, we are far away from being able to conclude very much.
 
I just want to be clear that by "singly-rooted goal hierarchy" I mean the goal system only. The primitives in terms of which the goal system is represented would need to be systemically stable under massively iterated self-modification. If that can be achieved, then the explicit representation should persist.
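 
Just to make the structural point concrete, here is a toy sketch in Python of what I mean by "singly-rooted" (purely illustrative; the names and structure are made up for this example, not taken from any actual AGI design):

    # Hypothetical sketch: a singly-rooted goal hierarchy, where every
    # subgoal traces its justification back to one explicit root supergoal.

    class Goal:
        def __init__(self, description, parent=None):
            self.description = description
            self.parent = parent          # None only for the root supergoal
            self.subgoals = []
            if parent is not None:
                parent.subgoals.append(self)

        def root(self):
            # Every goal in the hierarchy resolves to the same single root.
            return self if self.parent is None else self.parent.root()

    # The explicit representation that would need to persist across
    # self-modification is just this one root plus the derivation links.
    root_goal = Goal("root supergoal (explicitly represented)")
    subgoal = Goal("instrumental subgoal", parent=root_goal)
    assert subgoal.root() is root_goal

The stability question is then about the primitives underneath (here, trivially, the class definition and the parent links), not about the root goal itself.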
 
Of course, the complexity of mind required to support those primitives is, to put it mildly, extremely high. But hopefully, as we learn more about structuring those complexities, we can create a design that converges under operation rather than diverges.
 
Kind of like starting with a soliton wave and then adding active measures for rebalancing under exceptionally disruptive conditions. Err, perhaps not the best analogy, but I think it illustrates my point.
 
-Chris Healey
 
-----Original Message-----
From: owner-sl4@sl4.org on behalf of Ben Goertzel
Sent: Wed 2/25/2004 6:36 PM
To: sl4@sl4.org
Cc:
Subject: RE: Intelligence is exploitative (RE: Zen singularity)


        ...
        
        My own feeling is that singly-rooted goal architectures are by nature
        brittle and unlikely to survive repeated self-modification.
        
        However, I realize that Eliezer and others have different intuitions on this
        point.
        
        Experimentation and mathematics will, in future, give us greater insight on
        this point ... assuming we don't annihilate ourselves or bomb ourselves back
        into the Stone Age first ;-)
        
        -- Ben G
        
        




