RE: Fighting UFAI

From: pdugan (pdugan@vt.edu)
Date: Thu Jul 21 2005 - 11:42:57 MDT


My question on the matter is: if a single human mind could be expected to go
crazy in a "weird" universe, would it be anthropomorphic to suggest that a
non-human mind would experience the same? If it would be anthropomorphic,
then what is it about us anthropos that renders us incompatible with strange
ontological substrates? If this difficulty of ontological adaptation were
investigated, perhaps it would shed light on design issues relating to an AGI
experiencing a philosophical crisis versus an AGI greatly benefiting from the
experience and gaining information patterns worthy of being called "wisdom".

   Of course, if the limits of "knowing everything about something" are so
fine-grained, then perhaps with the right attitude a human or an AGI would be
able to take an experience grounded in a weird ontology in stride, signifying
a much lessened risk of philosophical crisis given radically differing qualia.

   Patrick

>===== Original Message From Chris Capel <pdf23ds@gmail.com> =====
>On 7/21/05, Tennessee Leeuwenburg <hamptonite@gmail.com> wrote:
>> > Eh? What about emotion is so special that it would require anything
>> > more than a Turing machine to implement as part of an AGI? (That begs
>> > the question of whether it's even desirable for Friendliness; the answer
>> > to that one seems to be emphatically NO.) How would quantum computing help
>> > anything?
>>
>> Allow me to respond to this entirely out-of-context, as this was a
>> debating point against something I didn't say. Rather, let me pose a
>> thought experiment to you.
>
>[clip thought experiment]
>
>> Has she learnt anything new about colour? If you accept that she has,
>> then qualia must be real, because she already knew everything that
>> science could inform her about the world and about colour. There must,
>> therefore, be something real about colour which is not addressed by
>> science.
>
>Well, I read a good essay by Dennett examining this very experiment
>in The Mind's I. Basically, his argument was that the intuition pump
>is misleading because of the phrase "learned everything about
>vision/seeing red". We really don't know what knowing "everything"
>about this subject would be like, so our intuitive idea of that
>amount of knowledge is approximately what a very accomplished Ph.D.
>or two or three would collectively know on the subject. But taken
>literally, it implies an almost infinite amount of knowledge, most of
>it mostly useless. Still, we certainly can't rule out the possibility
>that a scientist living in a time when the science of the brain is
>mature and mostly complete would be able to use all of the existing
>scientific knowledge, and knowledge of how her own brain is wired, to
>know exactly what visual impression she would receive from a red
>object. In fact, the situation--knowing "everything" about
>something--is so foreign to us that using it as a thought experiment
>is practicing philosophy on rather shaky ground.
>
>Actually, bringing this back to the original point (did this thought
>experiment bear on that point?), I do lend some credence to the
>existence of qualia, and still I have no trouble believing that they
>could arise on purely non-quantum biological devices, or even on
>electronic devices. Now, I have no reason to believe that they do,
>except that most thought apparently does, and it would be quite an
>exception, and a violation of Occam's Razor, to say that it requires a
>fundamentally different kind of device to support them, and I just
>don't see the evidence, nor the justification. Just as Occam's Razor
>seems to some to discount the possibility of qualia, those who see
>their primary experience as lending evidence to qualia ought to apply
>Occam's Razor to the idea that qualia are somehow exceptional
>processes in the brain, ones that can't be modeled the same way the
>rest of the brain can.
>
>> > I don't quite understand what kind of threat you could see concerning
>> > an AI suddenly understanding a different ontology and going crazy. How
>> > likely would this be?
>>
>> The quote marks indicate that you are replying to me, but in fact I
>> didn't suggest this.
>
>I didn't mean to imply this. But I believe pdugan did suggest, and I
>could be wrong, that there is a danger in the possibility that an AI
>would find some other universe, or some other mode of existing in this
>one, that lends itself to different modalities and a different
>ontology. I was just inquiring what he thinks the exact nature of the
>threat posed by that situation would be, besides being existential.
>My first impression is that it's rather unlikely, but he didn't do
>much explaining.
>
>Chris Capel
>--
>"What is it like to be a bat? What is it like to bat a bee? What is it
>like to be a bee being batted? What is it like to be a batted bee?"
>-- The Mind's I (Hofstadter, Dennett)


