From: Maru (firstname.lastname@example.org)
Date: Sat Oct 23 2004 - 21:44:43 MDT
If what I ask has been beaten to death already, then please
someone direct me to the appropriate archive threads.
But Ben, what I asked was basically: what part of FAI is least
supported by evidence, and taken most on faith? (I didn't want to
put it so baldly because it sounds insulting.) Should I
understand you to believe that Eliezer may have some nice
theories but has yet to produce code and validate them?
--- Ben Goertzel <email@example.com> wrote:
> This stuff has been gone over many times in the archives. To
> sum up my own view:
>
> 1) I believe that working out a truly self-modifying,
> superintelligent AI system is a hard problem but a solvable
> one. I think I have a solution, which however will take some
> years to complete (see http://www.realai.net/AAAI04.pdf or
> www.agiri.org). So far as I can tell, Eliezer does not yet have
> a viable solution, though he may well come up with one in the
> future.
> 2) About proving whether Friendliness can be guaranteed or not:
> science is not at the level now where this kind of thing can be
> *proved*, but every indication is that Friendliness CANNOT be
> guaranteed. There is not a shred of evidence, intuitive or
> mathematical or scientific, that it can be guaranteed for any
> superintelligent AI system.