From: Ben Goertzel (ben@goertzel.org)
Date: Thu Feb 26 2004 - 07:38:48 MST
RE: Intelligence is exploitative (RE: Zen singularity)
Hi,
I understand that your "singly-rooted goal hierarchy" refers to the goal system only.
But a big problem is that the primitives in terms of which the goal system is defined are not really likely to be "primitive" -- they're more likely to be complex human-culture concepts formed from amalgams of other complex human-culture concepts ... not the sort of thing that's likely to remain stable in a self-modifying mind....
Of course, you can give the AI the goal of maintaining a simulacrum of the human understanding of these primitives, even as it transcends human understanding and sees the limitations and absurdities of human "primitive" concepts ... but I'm doubting that *this* is a stably achievable goal... for similar reasons...
-- Ben
-----Original Message-----
From: Christopher Healey [mailto:owner-sl4@sl4.org] On Behalf Of Christopher Healey
Sent: Wednesday, February 25, 2004 8:14 PM
To: sl4@sl4.org
Subject: RE: Intelligence is exploitative (RE: Zen singularity)
Hi Ben :)
Once again, I agree that at the current stage of both theory and experimentation, we are far from being able to conclude very much.
I just want to be clear that by "singly-rooted goal hierarchy" I mean the goal system only. The primitives in terms of which the goal system is represented would need to be systemically stable under massively iterated self-modification. If that can be achieved, then the explicit representation should persist.
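Just so we're picturing the same thing: the structural invariant I have in mind is something like the toy sketch below (Python, all names hypothetical -- this is an illustration of the shape, not a real design).

    # Toy sketch of a singly-rooted goal system: every subgoal
    # derives its justification from exactly one root supergoal.
    # The invariant a self-modifying system would have to preserve
    # is that every goal still traces back to that same root.

    class Goal:
        def __init__(self, description, parent=None):
            self.description = description
            self.parent = parent            # None only for the root
            self.children = []
            if parent is not None:
                parent.children.append(self)

        def root(self):
            # Walk up the hierarchy to the single root supergoal.
            return self if self.parent is None else self.parent.root()

    def singly_rooted(goals):
        # Holds iff all goals share one root.
        return len({g.root() for g in goals}) == 1

    root = Goal("root supergoal")
    g1 = Goal("subgoal A", parent=root)
    g2 = Goal("subgoal B", parent=g1)
    assert singly_rooted([root, g1, g2])

The hard part, of course, is not checking this invariant but guaranteeing that the primitives inside each goal's description stay meaningful after modification.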
Of course, the complexity of mind required to support those primitives is, to put it mildly, extremely high. But hopefully, as we learn more about structuring those complexities, we can create a design that converges under operation rather than diverges.
Kind of like starting with a soliton wave, and then adding active measures for rebalancing under exceptionally disruptive conditions. Err, perhaps not the best analogy, but I think it restates my point.
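In fixed-point terms, what I mean by "converges under operation" is roughly this: treat each self-modification as a map applied to the system's state; if the map is a contraction near the intended configuration, perturbations damp out, and if it expands, they amplify. A toy one-dimensional illustration (constants arbitrary, purely for the analogy):

    # Each self-modification step is a map f applied to the state x.
    # Both maps below share the fixed point x = 2.0, but only the
    # contraction (|slope| < 1) pulls perturbed states back to it.

    def stable_step(x):
        return 0.5 * x + 1.0    # contraction: perturbations shrink

    def unstable_step(x):
        return 2.0 * x - 2.0    # expansion: perturbations grow

    def iterate(step, x, n=20):
        for _ in range(n):
            x = step(x)
        return x

    print(iterate(stable_step, 2.3))    # approaches 2.0
    print(iterate(unstable_step, 2.3))  # blows up away from 2.0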
-Chris Healey
-----Original Message-----
From: owner-sl4@sl4.org on behalf of Ben Goertzel
Sent: Wed 2/25/2004 6:36 PM
To: sl4@sl4.org
Cc:
Subject: RE: Intelligence is exploitative (RE: Zen singularity)
...
My own feeling is that singly-rooted goal architectures are by nature
brittle and unlikely to survive repeated self-modification.
However, I realize that Eliezer and others have different intuitions on this
point.
Experimentation and mathematics will, in future, give us greater insight on
this point ... assuming we don't annihilate ourselves or bomb ourselves back
into the Stone Age first ;-)
-- Ben G