From: Joel Pitt (joel.pitt@gmail.com)
Date: Thu Jul 14 2005 - 01:13:43 MDT
> I think in order to establish your position, you need to identify
> what, if any, ultimate goals will lead to the most intelligent
> configurations.
I think that a goal of fulfilling the goals of all other existing
intelligent agents would lead to the most intelligent configuration. How
ve would do this I'm not sure, since there would be many conflicting goals
and ve'd need to select the optimal solution, which is clearly very difficult.
But I'm not a superintelligent Friendly AI.
Of course there would have to be a lot of immutable constraints, e.g.
preserving the existence of all the agents, so that the AI doesn't kill
everyone bar one person, whose goals are then fully satisfied, thus
scoring 100% goal satisfaction.
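To make that concrete, here's a toy sketch in Python (my own illustration,
with made-up satisfaction numbers) of why scoring by average satisfaction
over the agents that still exist goes badly wrong without that constraint:

def average_satisfaction(satisfactions, exists):
    """Mean goal satisfaction, counted only over agents that still exist."""
    kept = [s for s, e in zip(satisfactions, exists) if e]
    return sum(kept) / len(kept)

# With everyone alive, the conflicting goals only allow partial satisfaction.
everyone_alive = average_satisfaction([0.6, 0.5, 0.4], exists=[True, True, True])

# Without the preservation constraint, the "optimum" is to keep one fully
# satisfied person and remove everyone else, pushing the average to 100%.
one_survivor = average_satisfaction([1.0, 0.0, 0.0], exists=[True, False, False])

print(everyone_alive)  # 0.5
print(one_survivor)    # 1.0, which is why preserving all agents must be immutable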
Joel