From: Ben Goertzel (ben@goertzel.org)
Date: Fri Oct 03 2003 - 07:18:53 MDT
*****
My first posting will be a comment on Mr Yudkowsky's meaning of life FAQ
(http://yudkowsky.net/tmol-faq/tmol-faq.html)
> 2.5.1: Can an AI, starting from a blank-slate goal system, reason to any
> nonzero goals?
*****
It seems to me that the very concept of a "blank-slate goal system" is not
well-founded.
Of course, one can create an AI system with a data structure called "goals",
and leave this data structure blank!
But any system that acts in the world will have implicit goals, i.e. there
will be some functions that the system "acts like it's trying to optimize."
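To make this concrete, here is a toy sketch (my own made-up example, in
Python, not anything from the FAQ): the explicit goal structure is empty, yet
the hard-wired action rule still behaves as if it were maximizing a sensor
value.

    # Toy illustration: the explicit "goals" structure is blank, but the
    # fixed action rule still acts as if it were maximizing something.
    explicit_goals = []  # the "blank-slate" goal data structure

    def act(sensor_readings):
        # No goal is consulted here, yet always picking the direction with
        # the highest reading means the agent behaves as if it were trying
        # to maximize that sensor value: an implicit goal.
        return max(sensor_readings, key=sensor_readings.get)

    print(act({"north": 0.2, "south": 0.9, "east": 0.5}))  # prints: south

The point is just that "no explicit goals" does not mean "no goals in the
behavioral sense."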
You could argue that if a system's behavior is truly random, there will be
no implicit goals in this sense. But remember that "truly random" has
meaning only for infinite entities, not for finite ones (cf. the algorithmic
theory of randomness). For a finite system, one can never say for certain
that there are no implicit goals.
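In the standard notation of that theory (a rough gloss, not a formal
statement), the Kolmogorov complexity of a string x relative to a universal
machine U is

    K_U(x) = \min \{ |p| : U(p) = x \}

and a finite string can only be called "incompressible" relative to a chosen
U and a slack constant c, i.e. K_U(x) \ge |x| - c; randomness in the strict
Martin-Löf sense is defined only for infinite sequences. So for a finite
system, "truly random" is always relative to such choices, never absolute.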
If one has a system that is designed to have a blank-slate goal system, it's
gonna have implicit goals, but they may be quasi-random ones, which one did
not plan and cannot control....
Now, if one wants to know whether a non-explicitly-goal-oriented system
could ever achieve high levels of autonomous intelligent behavior via
iterated self-modification --- my answer is: YES, but in the manner of
"origin of life" rather than in the manner of "iterated improvement of a
mind." An intelligence without explicit goals might eventually
self-organize into a complex evolving substrate from which a self-improving
intelligence could emerge. But it could not in itself be a single
self-improving, self-organizing mind.
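As a very loose illustration of the distinction, here is a toy sketch
(entirely my own, with made-up parameters; real versions of either process
would be incomparably more complex). It just maximizes the number of 1s in a
bitstring two ways: a population shaped by externally supplied selection,
where no individual holds a goal, versus a single system that explicitly
scores candidate modifications of itself.

    import random

    def fitness(bits):
        return sum(bits)

    # "Origin of life" style: a population of goalless bitstrings, shaped
    # only by selection pressure (supplied externally here for simplicity).
    def evolve_substrate(pop_size=20, length=16, steps=2000):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(steps):
            i, j = random.sample(range(pop_size), 2)
            win, lose = (i, j) if fitness(pop[i]) >= fitness(pop[j]) else (j, i)
            # The loser is replaced by a mutated copy of the winner.
            pop[lose] = [b ^ (random.random() < 0.05) for b in pop[win]]
        return max(pop, key=fitness)

    # "Single self-improving mind" style: one system that evaluates
    # candidate modifications of itself against an explicit goal function.
    def self_improve(length=16, steps=2000, goal=fitness):
        mind = [random.randint(0, 1) for _ in range(length)]
        for _ in range(steps):
            candidate = list(mind)
            candidate[random.randrange(length)] ^= 1
            if goal(candidate) > goal(mind):  # nothing happens without "goal"
                mind = candidate
        return mind

    print(fitness(evolve_substrate()), fitness(self_improve()))

The only point of the sketch is that the second loop goes nowhere without a
goal function to compare candidates against, while the first pushes a
population along without any individual holding a goal at all.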
Note that the computational requirements for an origin-of-life type system
are gonna be VASTLY greater than the requirements for an individual mind.
-- Ben G