Re: Summary of current FAI thought

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Fri Jun 04 2004 - 23:37:53 MDT


On Fri, Jun 04, 2004 at 10:03:48PM -0700, Samantha Atkins wrote:
>
> On Jun 1, 2004, at 1:24 PM, Eliezer Yudkowsky wrote:
> >
> >For our purposes (pragmatics, not AI theory) FAI is a special case of
> >seed AGI. Seed AGI without having solved the Friendliness problem
> >seems to me a huge risk, i.e., seed AGI is the thing most likely to
> >kill off humanity if FAI doesn't come first. If a non-F seed AGI
> >goes foom, that's it, game over.
>
> I have heard you say this many times. However, it is not certain
> that a non-F seed AGI going "foom" would kill off humanity. At least
> it isn't to me.

Remind me not to support any AI projects that you are coding on.

In the meantime, allow me to suggest that you read up on the Riemann
hypothesis failure scenario, and/or read the online book "The
Metamorphosis of Prime Intellect".

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/  ***  I'm a *male* Robin.
"Many philosophical problems are caused by such things as the simple
inability to shut up." -- David Stove, liberally paraphrased.
http://www.lojban.org/  ***  loi pimlu na srana .i ti rokci morsi
