From: Jef Allbright (jef@jefallbright.net)
Date: Sat Dec 17 2005 - 12:48:26 MST
Phillip -
I experienced mixed feelings about my comment right about the moment I
hit Send, so I'm glad what I said didn't demotivate you. I think it's
important for people to have these discussions to test and refine
their thinking, but beyond a certain point it's mostly regression to
the mean. I'm personally close to boiling over with frustration at the
inefficiencies of the current discussion lists, and I haven't decided
whether I should get some distance to regain my reserve (my usual
mode) or jump into the fray and stir things up (sometimes useful, but
sometimes destructive). A better strategy would be to personally
spend more time with like-minded people building a more effective
collaborative knowledge framework, but I haven't found a way to fit
that into my schedule.
On 12/17/05, Phillip Huggan <cdnprodigy@yahoo.com> wrote:
> Sorry, my post was crap. I never presented any argument beyond circular
> reasoning for *why* we shouldn't just slice and dice Nemo. An RPOP would
> need less energy to merely feed an already existing human, cure a brain
> tumour, immortalize somatic cells, etc., than ve would to assemble a new,
> better person from scratch. Efficiency would dictate that we not be
> steamrolled.
>
> I was making a garbled attempt to expose two very different ethics;
> which one is correct to apply depends on whether or not we have a shot
> at an infinite energy source. And I was also confusing the process of
> human utilitarianism with the functions of an RPOP.
> To make the world better, humans need diversity. An RPOP doesn't need
> anything but a goal structure and matter/energy. Two strictly identical
> persons are inferior to two separate skill sets. So I was against the
> idea of AGI tiling because it seems to indicate a maximum universal
> quality-of-living plateau will be found and then maintained until the
> end of time. This would mean the sum total of all man-years in the
> present and future is finite. But if there really might be a (dangerous)
> energy source out there that is abundant or efficient, the correct
> course of ethics is to bask in our inferior energy source for as long as
> possible, and then take a shot at the "fountain of youth". However, we
> might miss out entirely on this shot if the AGI is programmed only to
> tile and to ignore this possibility. If an AGI is programmed to maximize
> a finite value, it won't process the infinity.
> Consider an island slowly falling into the ocean. For the inhabitants,
> the ethics really should be subjective and selfish. Now consider the
> same island rising from the ocean. Human rights emerge as a possibility.
> If we pollute our immediate surroundings with AGI-tiled (better) energy
> rivals, we have turned what appeared to be a positive-sum ethical
> environment into a zero-sum, free-for-all game of Machiavellian
> survival.
> "All great things bring about there own destruction through an act of self
> overcoming". They don't get saved by AGI. Beyond providing physical
> sustinence essentials, I don't see how else an AGI can really help us.
>
> Jef Allbright <jef@jefallbright.net> wrote:
> On 12/16/05, Phillip Huggan wrote:
> > In the absence of conscious entities in the universe, morality is
> > relative. But as soon as one little fishy exists, actions within the
> > future light-cone of fishy Nemo acquire a moral framework (insofar as
> > the actions affect Nemo). We can deduce this from our own conscious
> > first-hand appreciation of the faculties of pleasure and pain (in all
> > the forms we experience them). With these faculties our actions become
> > moral, as far as our engineering prowess extends across sentient
> > entities in existence. The asymmetric way we should value entities in
> > existence much more highly than seemingly identical (and often
> > superior) entities we could create is because: in the absence of
> > conscious entities in the universe, morality is relative.
> > Ethical behaviour does not apply to the sum total of all present and
> > future conscious entities in our future light-cone. It only applies to
> > the sum total of all present conscious entities in our future
> > light-cone. If an AGI kills us to make room for one trillion humans,
> > and then creates the humans, the correct future ethical judgement of
> > the AGI's actions would be based upon how well the AGI served those
> > trillion humans' needs. But the initial act of killing us off could
> > never be correctly justified, because at the time just before the
> > AGI's murder rampage, we 6 billion humans formed the only objective
> > metric by which the AGI's actions could be judged (animals too).
> > 5 billion years ago, tiling earth with orgasmium would have been
> > fine. But as soon as Nemo appeared, it became necessary for an AGI to
> > consider the well-being of Nemo if the AGI was to really be classified
> > as "friendly". Now in the 21st century a friendly AGI could only be
> > justified in sacrificing us humans under very extraordinary
> > circumstances.
> >
>
> I propose that this post, sincere in its intent, dense in its
> references to philosophical concepts, and intractable in its
> semantics, be used to test the capabilities of our future platform for
> untangling, disambiguating and elucidating such discussion.
>
> - Jef