From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 09 2003 - 18:11:35 MDT
> History's vivid lessons show us that it is not good intentions that we
> should trust, but good ideas. Ideas that are rational, timely and
> appropriately applied to the real world have been immensely beneficial
> in the past, and will continue to be so in the future. Upon reading
> the works you are critiquing, I was struck by how little mention there
> was of anything like 'good intentions'. Rather, the emphasis was on
> rational thinking with the purpose of avoiding death and destruction.
> The emphasis was on ideas and how to apply them to the real world. Good
> intentions? Trivially, yes. But the main thrust of FAI is solid ideas
> that could be practically applied to solve a real-world problem, with
> great leverage... it is difficult to find fault with that.
>
> Yours sincerely,
>
> Michael Roy Ames
However, Bill is correct that Eliezer's plans do not give much detail on the
crucial early stages of AI moral instruction. Without more explicit detail
in this regard, one is left relying on the FAI programmer/teacher's
judgment. Bill's point is that he doesn't have that much faith in anyone's
personal judgment, and would rather see a much more explicit moral-education
programme spelled out.
I'm not sure Eliezer would disagree with this, even though he has not found
time to provide such a thing yet.
My own feeling is that it's a few years too early to create such a
programme -- we'll need to get some experience teaching AGI systems simpler
things than morality first, and then we'll understand a lot better how to
create a proper moral educational programme for AGIs...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT