From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sat May 22 2004 - 03:48:02 MDT
--- Eliezer Yudkowsky <sentience@pobox.com> wrote:
> Eliezer Yudkowsky wrote:
> >
> > I think I have to side with Keith. I fear that human self-modification is far more dangerous than I would once have liked to imagine. Better to devise nutritious bacon, cheese, chocolate, and wine, than dare to mess with hunger - let alone anything more complex. You would practically need to be a Friendly AI programmer just to realize how afraid you needed to be, and freeze solid until there was an AI midwife at hand to help you *very slowly* start to make modifications that didn't have huge unintended consequences, or take you away from the rest of humanity, or destroy complexity you would have preferred to keep.
>
> Some examples of possible consequences, off the top of my head:
>
> You've got memories of enjoying cheeseburgers. What happens to the memories when the sensory substrate of recollection shifts? Are you going to keep the old hardware around for recollection? Will you add in a complex system to maintain empathy with your old self?
>
> Your old sense of taste was fine-tuned and integrated into your sense of pleasure and pain, happiness and disgust, by natural selection. Natural selection also designed everything else keyed into those systems. If you pick new senses, do they make sense? Does the pattern subtly clash with the pattern of systems already present?
>
> Will your new sense of taste be more or less complex than your old sense of taste? More intense or less intense? If more intense, does the new sense of taste balance with a mental system that is known to stay sane only under ancestral conditions of environment and neurology? Consider the effects on humans of non-ancestral Pringles and chocolate cake, loads of sugar and salt and fat not present in any ancestral foods. Adopting a more intense taste system can have the same effect, if the rest of the mind isn't upgraded accordingly to balance with the increased intensity of sensation.
>
> Maybe you would prefer to gradually grow into new tastes? What does the sharp discontinuity of direct self-alteration do to your sense of personal continuity?
>
> If the new taste sensation is more intense, do you become addicted to the act of self-modification for more intense sensations?
>
> You're eliminating cognitive complexity of yourself by getting rid of the complex pattern of the old system. Maybe you would prefer not to eliminate the old complexity - learn to appreciate lettuce *in addition to* Pringles?
>
> Can you really appreciate the long-term consequences of altering your mind this way? Does the new design you decided upon make any sense with respect to those criteria that you would use if you thought about the problem long enough?
>
> What is the long-term effect of adopting the general policy of eliminating old complexity that inconveniences you, and inscribing new complexity that seems like a good idea at the time?
>
> Other humans share your current taste sensations. Think of your awkward refusal of foods at dinner, the mainstream artistry of cooking you'll no longer be able to appreciate. Are you distancing yourself from the rest of humanity? Lest someone chime in that diversity is automatically good, let me add that this is one hell of a nontrivial decision.
>
> If you can alter your taste buds any time you feel like it, will it destroy, or alter, the perceived challenge and fun of cooking? Consider the effect on baseball if people could just run around the bases any time they wanted.
>
> And finally, what about all the consequences, and categories of consequences, that you haven't foreseen? When you imagine the act of self-modification, you will imagine only the easily mentally accessible consequences of the act, not the actual consequences. Just because you can't see the doom, doesn't mean the doom isn't there.
>
> --
> Eliezer S. Yudkowsky
> http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
Well, to some extent humans are 'self-modifying' themselves all the time: babies > kids > teenagers > adults. There may be useful analogies there.

Also, I have always been skeptical that 'self-modification' is anywhere near as straightforward as you thought, even for seed A.I.s. Aren't there analogous issues that apply to seed A.I.s as much as to humans?

As you know, I'm mighty skeptical of the so-called 'philosophers of mind' who claim that there is no 'Self'. The abstract invariants behind the goal system would form a 'Self', so any self-modification has to preserve those invariants to maintain personal identity. Even a seed A.I. specifically designed to self-improve runs into the problem of maintaining personal identity. Wouldn't even the seed A.I. be subject to some of the same constraints that limited human evolution? I always felt that it would have to be more like a legacy system than a continuous 'redesign from scratch'.
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
- H.G. Wells
Please visit my web-sites.
Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I., Maths: http://www.riemannai.org