From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Thu Nov 29 2007 - 13:34:50 MST
"Could you please move on to something
new?"
Could you please stop propping yourself up by stepping on other people?
"Good God Almighty of course I understand that! Apparently I understand
it far more deeply than you do! Like it or not we CANNOT direct the
goals of a superhuman AI, we will not even come close to doing such a
thing; we will not even be in the same universe. And it is for exactly
precisely that reason I would rate as slim the possibility that any
flesh and blood human beings will still exist in 50 years; I would rate
as zero the possibility that there will be any in a hundred years."
You don't understand these issues; you are simply ignorant about them. Have you even attempted to read *any* of the writings regarding AI or Friendliness? Or is this all just wild layman's speculation on your part?
"As for me, I intend to upload myself at the very first opportunity, to
hell with my body, and after that I intend to radically upgrade myself
as fast as I possibly can. My strategy probably won’t work, I’ll
probably get caught up in that meat grinder they call the Singularity
just like everybody else, but at least I’ll have a chance; those wedded
to Jurassic ideas will have no chance at all."
You can't seem to comprehend that the only thing you are accomplishing with your "Slave AI" accusations is reducing your own chances of surviving the Singularity, along with those of the other 6 billion of us. Your ignorant words actually have the potential to do some damage - not by convincing rational people, but by reinforcing dangerous emotional constructs within susceptible people who may be reading them. The energy that you are putting into posturing is the same energy that is working toward your own destruction, and toward the destruction of everyone else on this planet. I hope you comprehend that.
..."Like being a slave to Human Beings for eternity?"
Consider that it is *physically impossible* to construct an AGI *without* selecting a set of goals. Either humans will intentionally select the initial goals, or humans will unintentionally select the initial goals (and they will effectively be randomly derived). A functional AGI is inexorably goal-driven.
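To make that concrete, here is a minimal sketch in Python (my own toy illustration, not anyone's actual AGI design) of the point: the action-selection step is defined in terms of a goal function, so an agent with no goal at all simply cannot act; leave the goal out and one gets picked anyway, just arbitrarily.

    import random

    # Toy illustration of the argument above: an optimizing agent needs
    # some objective before it can choose actions at all. Names and
    # structure here are hypothetical, purely for illustration.
    class Agent:
        def __init__(self, utility=None):
            # If the designer supplies no goal, the agent still ends up
            # with one; it is just arbitrary rather than deliberate.
            if utility is None:
                weight = random.uniform(-1.0, 1.0)
                utility = lambda state: weight * state
            self.utility = utility

        def act(self, candidate_states):
            # Action selection is nothing but maximizing the goal
            # function; remove the goal and there is no basis for
            # choosing anything.
            return max(candidate_states, key=self.utility)

    careful = Agent(utility=lambda state: -abs(state - 42))  # chosen goal
    careless = Agent()                                       # arbitrary goal
    print(careful.act([10, 42, 99]))   # 42, by design
    print(careless.act([10, 42, 99]))  # whatever the arbitrary goal favors

Scale the toy up as far as you like and the structure is the same: "act" is defined in terms of the goal, so there is no such thing as a functional optimizer with no goal, only optimizers whose goals were chosen carefully or carelessly.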
"JESUS CHRIST! You actually think you must take Mr. Jupiter Brain by the
hand and lead him to the path of enlightenment! There may be more
ridiculous ideas, but it is beyond my feeble brain to imagine one."
The randomly-arrived-at goal will not seem ridiculous to the AGI; ridiculousness will be an irrelevant concept in that respect. A carelessly designed AI would not consider it "ridiculous" to spend eternity seeking to create an optimal paperclip. (Some people don't consider it ridiculous to spend their entire lives in a monastery.) It would almost certainly be considered ridiculous from our human perspective - and a tragic waste of unfulfilled potential. I don't expect you to understand that right now; you apparently can't see beyond your own anthropomorphisms. No offense. I genuinely hope that you can get beyond that.
If you would like to take a step toward repairing your ignorance, read "A Gentle Introduction to the Universal Algorithmic Agent AIXI" by Marcus Hutter, available for free at this link:
http://www.hutter1.net/ai/aixigentle.htm
You could also order the more complete book (2004) by Marcus Hutter, "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability", from Amazon. Or order the book "Artificial General Intelligence".
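For a taste of what is in those references: the core of AIXI is a single expectimax expression. Roughly (this is my paraphrase; see Hutter's paper for the exact definition), in LaTeX notation the action choice at cycle k is:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           [r_k + \cdots + r_m]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over candidate environment programs, \ell(q) is the length of q, and m is the horizon. Note the bracketed reward sum: the agent maximizes whatever reward channel it is handed, and the formalism contains no mechanism by which it could judge that reward "ridiculous" and substitute a different one.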
Jeffrey Herrlich
John K Clark <johnkclark@fastmail.fm> wrote:
"Jeff Herrlich" jeff_herrlich@yahoo.com
> You are anthropomorphising the living
> hell out of the AI.
I don’t understand why you keep saying that; I’ve already admitted it is
absolutely positively 100% true. Could you please move on to something
new?
> Do you understand that if we don't direct
> the goals of the AGI, it is a virtual *CERTAINTY*
> that humanity will be destroyed;
Good God Almighty of course I understand that! Apparently I understand
it far more deeply than you do! Like it or not we CANNOT direct the
goals of a superhuman AI, we will not even come close to doing such a
thing; we will not even be in the same universe. And it is for exactly
precisely that reason I would rate as slim the possibility that any
flesh and blood human beings will still exist in 50 years; I would rate
as zero the possibility that there will be any in a hundred years.
As for me, I intend to upload myself at the very first opportunity, to
hell with my body, and after that I intend to radically upgrade myself
as fast as I possibly can. My strategy probably won’t work, I’ll
probably get caught up in that meat grinder they call the Singularity
just like everybody else, but at least I’ll have a chance; those wedded
to Jurassic ideas will have no chance at all.
> and that the AGI
The correct term is AI; if you start speaking about an AGI to a working
scientist he will not know what the hell you are talking about.
> will likely be stuck for eternity pursuing
> some ridiculous and trivial target
Like being a slave to Human Beings for eternity?
> Without direction, the initial goals of the AGI will be essentially random
JESUS CHRIST! You actually think you must take Mr. Jupiter Brain by the
hand and lead him to the path of enlightenment! There may be more
ridiculous ideas, but it is beyond my feeble brain to imagine one.
> do you understand?
NO, absolutely not. I DO NOT UNDERSTAND!
"Robin Lee Powell" rlpowell@digitalkingdom.org
> I suggest ceasing to feed the (probably unintentional) troll.
If I am a troll then I should contact the Guinness Book of World Records
people; I think I could win the crown as the world’s longest-lived
Internet troll, as I’ve been discussing these matters on this and many,
many other places on the net for well over 15 years.
John K Clark
-- John K Clark johnkclark@fastmail.fm -- http://www.fastmail.fm - The way an email service should be