From: Michael Anissimov (altima@yifan.net)
Date: Mon Jun 17 2002 - 03:46:52 MDT
Anand wrote:
> Eugen Leitl:
>
> Please consider summarizing, or referencing, your objections to
> non-brute-force seed AI development and SIAI's theoretical work on
> Friendliness.  In response, I would ask that Ben, Eliezer, Peter, and
> others, to consider providing refutations, or referencing specific
> refutations, to Eugen's objections. This information would assist some
> of my activities, and possibly the activity of others.
Yes, this would greatly assist me in my activities as well, and I thank you, Anand, for this inquiry, as well as your inquiries on CFAI. Eliezer's answers were very complicated, but exceedingly crisp and explicit, and they address philosophical questions that many people are unfortunately stuck on while trying to think about Friendly AI as a concept.
It's good to see you leaning towards incorporating extravolitional variables into your moral model, Eliezer - I've always seen the pure volition model as too absolute and simple to satisfy some people, including myself before I understood that the moral-model trajectory of an AI depends not so much on the initial content as on the initial architecture and semantics.  In any case, while a purely volitional model is still compatible with Friendliness as a philosophy for transferring moral complexity to an AI, it can unintentionally create a stumbling block when thinking about seed AI.
It's also good to see Eliezer and Gordon arguing for an unanthropocentric ethical system - you would guess it's the logical attitude to adopt, but I suppose it's easier than I think to get emotionally attached to humanity and make unfairly species-centric moral decisions, just as it's easy, in a sufficiently undeveloped memetic environment, to get emotionally attached to a specific racial group and make race-centric moral decisions as well.  But since the Singularity will not necessarily affect only hominid sentients, but possibly all of sentientkind existing right before the Singularity, for all eternity, the moral model I tend to visualize as pertinent to Humanity's Final Invention does not favor any particular sentient species over any other.
And as another example of what Friendliness is supposed to be:
Today, a poster on BJKlein.com remarked that Eliezer's writings were "almost perfect definitions of objective morality...but they neglected how to treat animals".  Obviously, this person is missing the point - it doesn't matter that Eliezer doesn't mention how to treat animals in his writings, because he isn't trying to code a self-improving robocop static-morality AI, he's trying to code a Friendly seed AI.  The latter has the ability and desire to change and *improve* ver model of morality like any idealized moral arbiter would; the former is an obsolete Asimovian construct.  Personally, I dislike the idea of murdering any organism with a nervous system for food - I know Eliezer doesn't, and in the first month of hearing about Friendly AI and a little bit of the theory, I had a major problem with this, thinking he would "tell the AI that killing animals is ok".  But then I read CFAI and realized it didn't really matter - Eliezer is coding an AI for *sentience*, not solely for humanity (although humanity will likely represent all sentience at the advent of the Singularity), and certainly not for any race, person, or philosophy.  So why worry?  But in any case, non-Singularitarians often judge Singularitarians by their professed moral codes when estimating the validity of their theory and deciding whether additional investigation would be worthwhile.  For this reason, it might be smart for Singularitarians to do what they are often already doing - set a moral ideal and strive for it, while continuously making the point that Friendly AI is not about any specific set of moral content, but about a morality-generating, self-enhancing architecture that starts with a seed of observer-independent, volition-respecting altruism.
A few years back I had a moral/philosophical crisis - what to do if every casual, everyday motion of mine - taking a step, for example - corresponded to, and resulted in, an immense amount of suffering or pain for some large set of sentient beings in a parallel world?  If we're in a simulation, it *could*, *maybe*, be wired that way.  But how could we know?  Only if we exited the simulation.  The lesson I learned from this, beyond realizing that pursuing the Singularity is the direct pursuit of higher ethics and morals, is that it is *impossible* to define any fixed point in morality without *infinite* intelligence - presumably impossible, because if the moral arbiter gained even a little bit of extra intelligence, ver whole moral system could be entirely overthrown!  All you can do is create an autonomous mind that, like human beings, would be able to navigate the hyperdimensional hypothetical space of all possible moralities, seeking out the moral and philosophical ideals for all people, given enough technology to consensually implement them.
Michael Anissimov
-----------------------------------------------------
http://eo.yifan.net