From: nuzz604 (nuzz604@gmail.com)
Date: Wed Dec 14 2005 - 11:06:18 MST
This is something that I have been concerned about but wasn't sure how to
word in a post, so thank you for mentioning it. It all comes down to the
interpretation of the goals. My concern is this: if goals are what power
the intelligence, what kind of intelligence is used to interpret those
goals?
It seems like a chicken-and-egg problem unless you build a separate system
just for interpreting these goals. Even so, I think more work should be done
on goal interpretation (which will be a major stepping stone to AI),
particularly on interpreting a goal of friendliness. I am sure some would
agree with me that the world does not have much time. If your theory is
about as friendly as you can make it, then work on figuring out how to make
an AI interpret a goal of friendliness, because that will take just as much
effort and time, and time is something we have little of.
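To make the worry a little more concrete, here is a toy sketch in Python
(purely illustrative; the goal string, the two interpreter functions, and
the action list are all made up). The decision-theory part is trivial, and
the same goal produces opposite behavior depending on which interpretation
mechanism you plug in:

# Purely illustrative: two hypothetical readings of the same vague goal.
GOAL = "friendliness"
ACTIONS = ["help the human", "ignore the human", "confine the human for safety"]

def interpret_a(goal, action):
    # Reading 1: friendliness means doing what the human wants.
    return 1.0 if action == "help the human" else 0.0

def interpret_b(goal, action):
    # Reading 2: friendliness means preventing all possible harm,
    # even at the cost of the human's autonomy.
    return 1.0 if action == "confine the human for safety" else 0.0

def decide(interpret):
    # The "goal engine": pick whichever action the interpreter scores highest.
    return max(ACTIONS, key=lambda a: interpret(GOAL, a))

print(decide(interpret_a))  # -> help the human
print(decide(interpret_b))  # -> confine the human for safety

Everything interesting happens inside the interpreter; the goal engine just
rubber-stamps whatever the interpretation mechanism says.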
Mark Nuzzolilo
----- Original Message -----
From: "Richard Loosemore" <rpwl@lightlink.com>
To: <sl4@sl4.org>
Sent: Wednesday, December 14, 2005 5:17 AM
Subject: Not the only way to build an AI [WAS: Please Re-read CAFAI]
> The behavior of an AGI with such a goal would depend crucially on what
> mechanisms it used to interpret the meaning of "thinking is good". So
> much so, in fact, that it becomes stupid to talk of the system as being
> governed by the decision theory component: it is not, it is governed by
> whatever mechanisms you can cobble together to interpret that vague goal
> statement. What initially looked like the dog's tail (the mechanisms that
> govern the interpretation of goals) starts to wag the dog (the
> decision-theory-based goal engine).
> Richard Loosemore