From: Randall Randall (randall@randallsquared.com)
Date: Thu Jun 03 2004 - 17:26:48 MDT
On Jun 3, 2004, at 4:10 PM, Damien Broderick wrote:
> At 03:07 PM 6/3/2004 -0400, randallsquared wrote:
>> Using the term "optimization process"
>> doesn't predispose one's reader to imagine himself as the
>> intelligence, or otherwise anthropomorphize it. In that sense,
>> I think it's a far better term.
>
> The downside is that it predisposes people to imagine some immense
> complex re-entrant system as a relentlessly univocal monologist
> vulnerable to flinging itself into a single appalling attractor.
It seems to me that the reason for the focus on such a
"monologist" is that a system designed to optimize a single
thing is likely far easier to build than a system which not
only has a balance of goals but also a mechanism to keep any
one goal from winning the struggle outright. After all, humans
sometimes have this problem too; it's not uncommon for a human
to become maniacally focused on a single goal. People seem to
become more prone to this as intelligence increases, so even
judging from our experience with human intelligences, it's
plausible that higher intelligence makes one less likely to
see the value of maintaining a multiplicity of goals.
But, in any case, building a very clever system to reach a
single goal (Friendliness) seems to me more in line with what
Eliezer is doing than building a generalized, humanlike person.
And since such a single-goal system seems easier to build than
a humanlike person, it would be reasonable to worry about the
attractors that other projects might fall into.
> Damien Broderick
> [speaking for One True and True Knowledge]
That must be a kaleidoscopic experience. :)
--
Randall Randall <randall@randallsquared.com>
'I say we put up a huge sign next to the Sun that says "You must
be at least this big (insert huge red line) to ride this ride".'
 -- tghdrdeath@hotmail.com