Re: CNN article with Bostrom interview and Kurzweil quotes

From: R. W. (rtwebb43@yahoo.com)
Date: Wed Jul 26 2006 - 13:09:08 MDT


I think you hit upon the crux of the matter -- what will we do without problems or necessity? I believe we would have to continuously invent new ways of having "fun" -- or rather, to create significantly more complex problem states so that the time between halts increases. In every engineering problem I run into, I am trying to achieve some state of control -- a steady state. Once there, I move on to another problem. This is the meaning of my life. I wonder what would happen if all of the problems presented to me became iteratively trivial -- so trivial that the time it takes me to notice and solve a problem, or even to predict and solve it before it ever manifested, would asymptotically approach zero. I would experience halting states so frequently that they would seem continuous. Through iterative learning and continuous self-improvement, the probability of error in my predicting future problems should also approach zero.

A better question may be: "Is there an infinite number of non-trivial classes of problems?" Without a problem set, what's the point? Do you permanently halt in a state of bliss and all-knowingness? At that point, I would hope to be intelligent enough to create new problems. The funny part is that I would seamlessly solve them while creating them. Ultimately, an uber-me would need to create an intractable holonic problem, which I would attempt to solve level by complex level in order to keep from being bored. I would create new constraints and limitations to prevent boredom. I would continuously create new classes of problems. I would have the G-d dilemma. If I could ever approach this level of reality, I think I could make G-d laugh really hard!
   
  
Charles D Hixson <charleshixsn@earthlink.net> wrote:
  R. W. wrote:
> Yes. ACCEPTING mortality. I don't expect love or even rationality.
> In fact, I don't expect any response. What good is there in outliving
> all the stars in the universe?
...

You will probably be interested in a book that Charles Stross is working
on. Its working title is "Halting State".

I, personally, expect that there is a finite amount of extension that is
reasonable for any particular person. I also suspect that it's not the
same for all. Of course, my suspicion is on very poor footing,
evidentially speaking. The current evidence is basically that when
people's bodies start breaking down in ways that they believe to be
irreversible, they tend to lose interest in keeping them going. Mine is
just a suspicion that any state machine will eventually end up in either
a loop or a halting state. And that if you notice the loop, you'll
eventually choose to get out of it. So if you are intelligent enough to
notice the loops you get into, you'll eventually arrive at a halting state.
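The loop-or-halt suspicion above is essentially a pigeonhole argument: a deterministic machine with finitely many states must either reach a halting state or revisit a state it has seen before, after which it repeats forever. A minimal sketch in Python (my illustration, not from the original thread; the toy machine and names are made up):

```python
# Pigeonhole sketch: follow a deterministic transition function over a
# finite state space. Every trajectory either reaches a halt state or
# revisits an earlier state, at which point it is in a loop forever.

def classify(step, start, halt_states):
    """Return ("halt", n) if a halt state is reached after n steps,
    or ("loop", n) where n is the step at which the loop was entered."""
    seen = {}                      # state -> step at which it first appeared
    state, n = start, 0
    while True:
        if state in halt_states:
            return ("halt", n)
        if state in seen:
            return ("loop", seen[state])   # trajectory has closed a cycle
        seen[state] = n
        state = step(state)
        n += 1

# A toy machine over states 0..9: even states step down toward 0 (halt),
# odd states cycle 1 -> 3 -> 5 -> 1 ...
def step(s):
    return s - 2 if s % 2 == 0 else {1: 3, 3: 5, 5: 1}[s]

print(classify(step, 8, {0}))  # ("halt", 4)
print(classify(step, 3, {0}))  # ("loop", 0)
```

Because `seen` can hold at most as many entries as there are states, the while loop is guaranteed to terminate -- which is the intuition behind "any state machine will eventually end up in either a loop or a halting state."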

Now as to whether this would occur before or after the last star has
died... I don't see this specified by the problem conditions, so I
expect a variability in the responses. A question that is to me more
interesting is "would these minds see merger as a viable option?" (Think
Spock's mind meld made permanent.) If some trans-human entity has a
stronger "theory of mind" of your mind than *you* do, can you die
without it being willing to allow its model to terminate? (I.e., what
do you mean by "I"?) Etc.

                 



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT