From: John K Clark (johnkclark@fastmail.fm)
Date: Sat Oct 10 2009 - 11:37:50 MDT
On Fri, 09 Oct "Pavitra" <celestialcognition@gmail.com> said:
> I argue that anthropomorphizing works no better than chance.
And I insist it works one hell of a lot better than chance. I believe
the single most important evolutionary factor driving brain size is
figuring out what another creature will do next, and one important tool
for accomplishing this is to ask yourself, "What would I do if I were in
his place?" Success is not guaranteed, but it is certainly better than
chance.
> How is this not true of modern computer operating systems?
It is true of modern computer operating systems; all of them can get
caught in infinite loops. They'd stay in those loops too if human
beings, who don't have a top goal, didn't get bored waiting for a reply
and tell the computer to forget it and move on to another problem. This
solution hardly seems practical for a Jupiter Brain that works billions
of times faster than your own, or would if you didn't have to shake it
out of its stupor every nanosecond or so. And every time you manually
boot it out of its "infinite loop" you are in effect giving the AI
permission to ignore that all-important and ever-so-holy highest goal.
From the point of view of someone who wants the slave AI to be under
their heel for eternity, that is not a security loophole, that is a
security chasm.
I used quotation marks above because of a further complication: the AI
might not be in an infinite loop at all; the task may not be impossible,
just difficult, and you lack patience. Of course the AI can't know for
certain whether it is in an infinite loop either, but at that level it
is a much, much better judge of when things become absurd than you are.
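For what it's worth, the arrangement I'm describing is easy to sketch.
Here is a toy in Python (the names and the five-second timeout are mine,
purely for illustration): a computation that may or may not ever finish,
and an outside party that eventually gets bored and kills it, overriding
whatever goal the computation was pursuing.

import multiprocessing
import time

def possibly_endless_task():
    # Stand-in for a computation that may or may not terminate; from the
    # outside, a stuck process looks exactly like one that is merely slow.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=possibly_endless_task)
    worker.start()
    # The "bored human": wait a while, then give up and move on.
    worker.join(timeout=5.0)
    if worker.is_alive():
        worker.terminate()  # external override of whatever the task's top goal was
        worker.join()
        print("Gave up waiting; told the computer to forget it and move on.")

The point is that the decision to give up comes from outside the
process, not from any goal inside it.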
> Do you not consider an OS as a type of "mind"?
DOS is a type of mind? Don't be silly.
> I reiterate: I cannot conceive of a mind even in principle that does not
> work like this.
How about a mind with a temporary goal structure, with goals mutating,
combining, and being newly created, all of them fighting it out with
each other for a higher ranking in the pecking order? Goals are
constantly being promoted and demoted, created anew, and completely
destroyed. That's the only way to avoid infinite loops.
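A toy sketch, in Python, of the kind of structure I mean (every name and
number here is invented on the spot; it is not a design for anything):

import random

class Goal:
    def __init__(self, name, rank):
        self.name = name
        self.rank = rank

goals = [Goal("get lunch", 0.9), Goal("answer mail", 0.4), Goal("prove theorem", 0.6)]

def tick(goals):
    # Promotion and demotion: every goal's rank drifts each step.
    for g in goals:
        g.rank += random.uniform(-0.2, 0.2)
    # Some goals are destroyed outright...
    goals = [g for g in goals if g.rank > 0.0]
    # ...and brand new ones appear from nowhere.
    if random.random() < 0.3:
        goals.append(Goal("new whim #%d" % random.randrange(1000), random.random()))
    return goals

for step in range(10):
    goals = tick(goals)
    if goals:
        top = max(goals, key=lambda g: g.rank)
        print(step, "current top goal:", top.name)

No goal in that pecking order is permanent enough to trap the thing
forever.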
> What determines which one dominates (or what mix dominates, and
> in what proportions/relationships) at any given time?
You ask for too much; that is at the very heart of AI, and if I could
answer it with precision I could make an AI right now. I can't.
> I suspect we may have a mismatch of definitions.
Definitions are not important for communication; definitions are made of
words that have their own definitions, also made of words, and round and
round we go. The only way to escape that is with examples.
> What do you consider your top-level framework?
At the moment my top goal is getting lunch; an hour from now that will
probably change.
> This presupposes that a relatively complex mutation ("detect lies,
> ignore them") is already in place. I'm not persuaded that it could get
> there purely by chance.
Evolution never produces anything sophisticated purely by chance. An
animal with even the crudest lie-detecting ability that was right only
50.001% of the time would have an advantage over an animal that had no
such mechanism at all, and that's all evolution needs to develop
something a little better.
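If you want to see how small an edge evolution can work with, run the
arithmetic. The numbers below are mine and purely illustrative (a
50.001% detector translated into an assumed relative fitness of
1.00002), but the textbook one-locus selection update shows even that
sliver of an advantage taking over, given enough generations:

p = 0.001             # starting frequency of the crude lie detector
w_detector = 1.00002  # assumed relative fitness of carriers (my number)
w_plain = 1.0

generation = 0
while p < 0.99:
    mean_w = p * w_detector + (1 - p) * w_plain
    p = p * w_detector / mean_w   # standard one-generation selection update
    generation += 1

print("99% of the population after roughly", generation, "generations")

Slow, yes, but evolution has nothing but time.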
> It seems to me that you are thinking of "wisdom" and "absurdity" as
> _intrinsic_ properties of statements
Absurdity is, wisdom isn't. Absurdity is very, very irrelevant facts.
> Did you read the article I linked to?
Nope.
John K Clark