From: John K Clark (email@example.com)
Date: Sun Dec 02 2007 - 11:22:38 MST
On Fri, 30 Nov 2007 "P K" <firstname.lastname@example.org> said:
> Being smarter doesn't necessarily mean that the
> AI will ignore the dumber humans' instructions
> and choose to follow some other goals.
And I would maintain that not following the path that you know to be
smarter would not be very smart.
In most of the rest of your post you demonstrate that anthropomorphic
reasoning can be very imperfect, especially when dealing with a very
different sort of being, and you are not wrong.
> So if the 'putting yourself in its shoes' heuristic
> is so imperfect why keep it?
Because it’s all we’ve got. You haven’t a hope of understanding your fellow
human beings if you insist on only studying the pattern of neuron
firings inside their brains (although that would be child’s play compared
to an AI); you must look at a much higher and coarser level, the level
of emotion and goals and personality and intelligence (that last will
only work if you are smarter than the person under study).
> More importantly, what heuristics or models could
> be used instead? Causality!
Your view seems similar to that of Thomas McCabe; he too said it was too
difficult a task to understand a Jupiter brain at the level of wishes,
anger or contempt, but that instead all we have to do is understand the
hundred thousand million billion trillion trillion lines of computer
source code in Mr. Jupiter Brain, and then we will know exactly what
every nand and nor circuit in that massive intellect will be doing for
the next billion years. And this in spite of the fact that there are 5
line computer programs whose operation is completely inscrutable to us.
I don't think so.
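(A standard illustration of the point about tiny inscrutable programs is the Collatz iteration: whether the loop below halts for every positive starting value is a famous open problem, so nobody can say what even these few lines do in general. The function name is my own; it is just a sketch of the idea.)

```python
def collatz_steps(n):
    # Halve even numbers, map odd n to 3n+1, and count the steps
    # until we reach 1. Whether this loop terminates for EVERY
    # positive integer n is the unsolved Collatz conjecture.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 famously wanders for 111 steps
```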
John K Clark
-- John K Clark email@example.com -- http://www.fastmail.fm - Same, same, but different…
This archive was generated by hypermail 2.1.5 : Thu May 23 2013 - 04:01:32 MDT