From: Brian Atkins (brian@posthuman.com)
Date: Thu Jun 27 2002 - 18:44:48 MDT
James Higgins wrote:
>
> I would tend to worry very little if Ben was about to kick off a
> Singularity attempt, but I would worry very much if you, Eliezer, were.
That's quite odd, since last I checked Ben wasn't even interested in
the idea of Friendliness until we invented it and started pointing out
to SL4 exactly how important it is. Not that it seems to have had much
effect, since he still has no plans that I know of to alter his rather
dramatically risky seed AI experimentation protocol: basically, not
adding any Friendliness features until /after/ he decides that the
AI has advanced enough. (He has a gut feeling, you see, that there's
certainly no chance of a hard takeoff, and that even if one happened
it would all turn out ok... trust him on it.)
I'm still not quite sure how he gets away with engendering the kind of
sentiment I see above. I guess it's because we go to the effort of
putting our plans out for public review while he sits with the rest of
the crowd picking them apart. At least we _have_ plans out for public
review.
How about we set aside July for picking Ben's plan apart? After all, he
claims to be far closer to completion than anyone else, yet few people
here seem to have anywhere near as good a grasp of his ideas as they do
of SIAI's.
Disclaimer: this post is not intended to start any kind of us-vs.-them
dynamic. It exists simply to point out an important perceived difference
in the amount of critical discussion the two organizations' plans have
received.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/