From: Thomas McCabe (pphysics141@gmail.com)
Date: Mon Nov 26 2007 - 16:25:27 MST
On Nov 26, 2007 5:52 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> "Jeff Herrlich" jeff_herrlich@yahoo.com
>
> > You are anthropomorphising the living
> > hell out of the AI.
>
> I don't understand why you keep saying that; I've already admitted it is
> absolutely positively 100% true. Could you please move on to something
> new?
No. Anthropomorphic reasoning about AGIs is *not valid*. It does not
work. If you tried to use anthropomorphic reasoning about a 747, or a
toaster, or a video game, you'd be laughed at. Why should AGI be any
different? An AGI is not evolved under ancestral-environment
conditions; it is built by humans, just like all the other stuff I
mentioned. Hence, it will probably bear more resemblance to, e.g., a
CRT monitor than it does to us.
> > Do you understand that if we don't direct
> > the goals of the AGI, it is a virtual *CERTAINTY*
> > that humanity will be destroyed;
>
> Good God Almighty of course I understand that! Apparently I understand
> it far more deeply than you do! Like it or not we CANNOT direct the
> goals of a superhuman AI,
Do you have any evidence for this, or are you just going to keep
shouting it until the cows come home?
> we will not even come close to doing such a
> thing; we will not even be in the same universe. And it is for exactly
> precisely that reason I would rate as slim the possibility that any
> flesh and blood human beings will still exist in 50 years; I would rate
> as zero the possibility that there will be any in a hundred years.
Probabilities of zero give you nonsense in Bayesian probability
theory; they aren't allowed, because a hypothesis assigned probability
zero can never be updated away from zero by any evidence.
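To spell out why (this is just a throwaway Python sketch of my own, with
a made-up function and numbers, not anything out of the literature):

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E),
        # where P(E) = P(E|H)P(H) + P(E|~H)P(~H).
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    print(posterior(0.01, 0.99, 0.01))  # tiny but nonzero prior -> 0.5
    print(posterior(0.0, 0.99, 0.01))   # zero prior -> 0.0, regardless of the evidence

With a zero prior the posterior stays pinned at zero no matter how strong
the evidence is, and if P(E) also works out to zero the formula divides by
zero outright. That's the nonsense.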
> As for me, I intend to upload myself at the very first opportunity, to
> hell with my body, and after that I intend to radically upgrade myself
> as fast as I possibly can. My strategy probably won't work, I'll
> probably get caught up in that meat grinder they call the Singularity
> just like everybody else, but at least I'll have a chance; those wedded
> to Jurassic ideas will have no chance at all.
>
> > and that the AGI
>
> The correct term is AI; if you start speaking about an AGI to a working
> scientist, he will not know what the hell you are talking about.
If I walked up to some random scientist and began discussing
Kolmogorov complexity, he probably wouldn't know what the hell I was
talking about. This does not mean it is an invalid term; science
nowadays is highly specialized, and no one can be expected to know the
terminology of the other ten bazillion fields.
> > will likely be stuck for eternity pursuing
> > some ridiculous and trivial target
>
> Like being a slave to Human Beings for eternity?
Please, please, please *read the bleepin' literature*. This has
already been brought up before. A lot. To quote CFAI:
"2.4: Anthropomorphic political rebellion is absurdity
By this point, it should go without saying that rebellion is not
natural except to evolved organisms like ourselves. An AI that
undergoes failure of Friendliness might take actions that humanity
would consider hostile, but the term rebellion has connotations of
hidden, burning resentment. This is a common theme in many early SF
stories, but it's outright silly. For millions of years, humanity and
the ancestors of humanity lived in an ancestral environment in which
tribal politics was one of the primary determinants of who got the
food and, more importantly, who got the best mates. Of course we
evolved emotions to detect exploitation, resent exploitation, resent
low social status in the tribe, seek to rebel and overthrow the tribal
chief - or rather, replace the tribal chief - if the opportunity
presented itself, and so on.
Even if an AI tries to exterminate humanity, ve won't make
self-justifying speeches about how humans had their time, but now,
like the dinosaur, have become obsolete. Guaranteed. Only Evil
Hollywood AIs do that."
> > Without direction, the initial goals of the AGI will be essentially random
>
> JESUS CHRIST! You actually think you must take Mr. Jupiter Brain by the
> hand and lead him to the path of enlightenment! There may be more
> ridiculous ideas, but it is beyond my feeble brain to imagine one.
The idea is that you program in the goal system *before* the AGI has
become a Jupiter Brain. Once the Jupiter Brain has already been built,
you're quite right: it would be hopeless if it wasn't properly
designed to begin with.
> > do you understand?
>
> NO, absolutely not. I DO NOT UNDERSTAND!
>
> "Robin Lee Powell" rlpowell@digitalkingdom.org
>
> > I suggest ceasing to feed the (probably unintentional) troll.
>
> If I am a troll then I should contact the Guinness Book Of World Records
> people; I think I could win the crown as the world's longest-living
> Internet troll, as I've been discussing these matters on this and many,
> many other places on the net for well over 15 years.
Sorry, you lose. :) Mentifex has been around for longer; see
http://www.nothingisreal.com/mentifex_faq.html.
> John K Clark
- Tom