From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Aug 19 2003 - 11:35:06 MDT
I am in agreement with James Rogers, due to this generalization from
personal experience: When you know what you are doing, there is only ever
one thing *to* do, even if there is more than one way to do it; the
options you have are not nervously ambiguous; they are not chosen in
uncertainty as to the function being fulfilled. There may be more than
one way to build an AI if you do *not* really understand what you are
doing; evolution's construction of an evolution-unfriendly humanity comes
under this heading. But if you know what you are doing, then on the most
important level of description, your work consists of choosing
implementations for required goals that have only one obvious correct
description. There are many kinds of functional processes with
Bayes-structure; there is only one Bayes' Theorem, and only one thing
that all those processes are doing.
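
For concreteness, the single theorem in question, in plain notation,
with H a hypothesis and E the evidence:

    P(H|E) = P(E|H) * P(H) / P(E)

However a process with Bayes-structure happens to be implemented, it is
computing some instance of that one ratio.
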
That is how you know you are starting to understand something - when
your apparent options vanish, merging into alternate implementations of
a function that is not alterable. A high
school math student who is following memorized rules of algebra to solve
simultaneous equations might imagine that the operations, applied in a
different order, would yield different answers. He might take a stab
here, take a stab there, manipulate the equation this way and that - look
at how many different things there are to do! Maybe if you find a special
order of operations, you can make the answers come out differently?
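
A minimal worked illustration that the answers cannot come out
differently: take the system

    x + y = 5
    x - y = 1

Adding the two equations first gives 2x = 6, so x = 3 and y = 2.
Substituting y = 5 - x into the second equation instead gives
x - (5 - x) = 1, i.e. 2x = 6 again, so x = 3 and y = 2. The routes
differ; the destination does not.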

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence