From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Mon Nov 26 2007 - 12:37:32 MST
The problem here, John, is that you don't understand what you are talking about. You are anthropomorphising the living hell out of the AI; your internal conception of AI is now in a FUBAR condition. Do you understand that if we don't direct the goals of the AGI, it is a virtual *CERTAINTY* that humanity will be destroyed, and that the AGI will likely be stuck for eternity pursuing some ridiculous and trivial target? Without direction, the initial goals of the AGI will be essentially random for practical purposes. Random... do you understand? The space of potential motivations is Ginormous. You will not be rewarded for your flaunted moral "superiority". Can you understand that? Stop making extravagant (ahem, ridiculous) assertions about the motivations of other people and presenting them as facts.
Jeff Herrlich
John K Clark <johnkclark@fastmail.fm> wrote:
Roland has some ideas on how to make a slave:
> To avoid any possibility of dangers we program
> the OAI to not perform any actions other than
> answering with text and diagrams (other media
> like sound and video would be a possibility too).
> In essence what we would have is a glorified
> calculator. I think this avoids any dangers from
> the AI following orders literally with
> unintended consequences.
If you are like most people, there have been times in your life when a
mere human being has talked you into doing something that you now
understand to be very stupid. And this AI will be far smarter, more
interesting, more likable, and just more goddamn charming than any human
being you have ever met or will ever meet; Mr. AI will have charisma up
the wazoo, he will understand your physiology, what makes you tick,
better than you understand yourself. I estimate it would take the AI
about 45 seconds to trick or sweet-talk you (or me) into doing exactly
what it wants you (or me) to do.
> So we go to the OAI and say: "Tell me how I
> can build a friendly AI in a manner that I
> can prove and understand that it will be friendly."
And after that we teach a dog Quantum Mechanics. Get real, people! We
don’t even understand the simple little programs we write today, at
least not well enough to prove they will always do what we want them
to. The idea that a bipedal hominid expects to understand how a
Jupiter Brain works is downright comical.
There are some very bright fellows on this list but I must say, and not
for the first time, that the responses to this topic are intellectually
sub par. Even more disturbing, they are morally sub par. I’d like to say
more about that last statement but I was told by the SL4 people not to.
Apparently it is too shocking. Shock Level 5, anyone?
John K Clark
-- John K Clark johnkclark@fastmail.fm -- http://www.fastmail.fm - mmm... Fastmail...