From: Russell Wallace (email@example.com)
Date: Thu Jul 14 2005 - 13:29:06 MDT
On 7/14/05, Eliezer S. Yudkowsky <firstname.lastname@example.org> wrote:
> I disagree that they are completely different. Human-ish wisdom requires
> human-ish intelligence. You can't have a moral debate without the ability to
> debate. The fallacy is in presuming that the ability to optimize a star
> system requires moral complexity, that is, that intelligence implies what
> humans would call wisdom in the domain of morality; as far as I can tell, you
> can have a simple constant utility function that does not invoke difficult
> computation, let alone intelligent debate.
Yes, wisdom requires intelligence, but intelligence does not
necessarily require wisdom.
> See the past SL4 discussion with the subject line "Darwinian dynamics unlikely
> to apply to superintelligence".
I did. My position is that a superintelligent AI of the
operating-system type _can_ (with a lot of skill and care) be designed
so as not to undergo Darwinian evolution, and _needs_ to be so
designed (to avoid the scenario I mentioned). IIRC, this was also your
position in that discussion; correct me if I'm wrong?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT