Re: Happy Box

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Wed Apr 30 2008 - 06:33:12 MDT


2008/4/30 Krekoski Ross <rosskrekoski@gmail.com>:

> What if we, in the process, modify our higher-order mind? It's not a trivial
> thought -- while I think there would be emergent similarities in all
> intelligent life, there are two things to keep in mind: 1) that these are
> *emergent* similarities, and still dependent upon the overall architecture
> of the organism. 2) the desire to stay alive as an individual or as a
> species is, with a very high degree of probability, a genetic
> predisposition. If we start modifying things, we may find that we don't
> particularly care about continuation as an individual.

If my top level goal is X, then any modification I knowingly make will
be consistent with X; I won't make myself suicidal unless I already
think being suicidal is a desirable state of mind. Of course, it is
possible that I accidentally make myself suicidal due to being unable
to see all the consequences of my modifications, but there are such
risks in any endeavour that isn't completely predictable, which
covers almost everything worthwhile.
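
To make that concrete, here is a toy sketch in Python (entirely my own
construction; the names and numbers are illustrative, not anything from
this thread) of a self-modifier that filters every proposed change
through its current top-level goal:

    def goal_score(state):
        # Stand-in for how well a state of mind serves top-level goal X.
        return state.get("will_to_live", 0)

    def knowingly_modify(state, modification):
        candidate = modification(dict(state))
        # A knowing modification is adopted only if it looks at least as
        # good under the *current* goal. In this toy the evaluation stands
        # in for the agent's prediction, and "accidents" correspond to
        # that prediction turning out to be wrong.
        if goal_score(candidate) >= goal_score(state):
            return candidate
        return state

    agent = {"will_to_live": 10}
    agent = knowingly_modify(agent, lambda s: {**s, "will_to_live": 0})
    print(agent)  # {'will_to_live': 10} -- the suicidal edit is rejected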

> I guess, though, that it brings up a big question: is a brain (or a computer)
> capable of understanding itself in its full complexity, or is the capacity
> for intelligence that emerges from a system only capable of understanding a
> subset of the system itself? (Think Gödel.) If the former, we should have no
> problem with modifying ourselves. If the latter, self-modification by any
> intelligent system is potentially dangerous in the long term, and the only
> 'stable' means by which intelligence can increase is either analogous to
> evolution in some respects, or very, very slow incremental change. I guess
> one question we could ask to address this is: "Is an
> intelligent system capable of perfect simulation of itself?" I don't know if
> there's a really good answer; the only solution I can think of that uses the
> metaphor of current technology is an OS emulating a copy of itself within
> the OS, but then of course the emulated OS is not itself emulating an OS,
> so you run into recursive problems.
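
The recursive problem in that OS metaphor is easy to exhibit in a few
lines of Python (my own toy, not an actual emulator): a perfectly
faithful self-simulator must also simulate its own inner simulation, so
it never bottoms out, and any realisable one has to truncate to an
approximation of itself.

    def simulate(machine, depth=0, budget=4):
        # A *perfect* self-simulation would have no base case: the
        # simulated machine must itself run simulate(), and so on forever.
        print("  " * depth + f"simulating {machine} at depth {depth}")
        if depth >= budget:
            # Any real simulator truncates somewhere, i.e. it models only
            # a subset/approximation of itself -- the emulated OS that is
            # "not itself emulating an OS".
            return "inner levels elided"
        return simulate(machine, depth + 1, budget)

    simulate("OS")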

I don't see why making changes through deliberate self-modification
should be any riskier than making them through gradual evolutionary
change. Problems such as drug addiction and obesity occur precisely
because our evolved drives have no hope of keeping up with the modern
world. People can easily predict and understand the consequences of
overeating, and may desperately want to stop, but still can't overcome
the drive to eat. Not being
able to foresee all the consequences of reducing appetite should be a
minor consideration when weighed up against the destructive effects of
our evolutionary programming.

-- 
Stathis Papaioannou

