From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sat Dec 03 2005 - 01:49:47 MST
Herb Martin wrote:
>> ...it's fairly futile to try and evaluate what
>> wildly transhuman intelligences can and can't do
>
> Exactly.
>
> After the Singularity we have no real hope of predicting
> friendliness -- or knowing what initial conditions will
> necessarily favor such.
You're making a beginner's mistake: you're confusing the ability
to predict what an intelligence will /do/ with the ability to
predict what it will /desire/. If we could predict exactly what
an AGI would actually do, we could make its decisions ourselves,
and it wouldn't have transhuman intelligence. Fortunately,
predicting what the goals of an AGI system will be, including
the effects of self-modification, is a much more tractable
(though still very hard) endeavour.
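To make the do/desire split concrete, here's a deliberately crude
toy in Python. Every name in it is mine, invented purely for
illustration; no claim that a real seed AI looks anything like this:

class SeedAgent:
    def __init__(self, utility, compute=1):
        self.utility = utility    # the goal: small, fixed, inspectable
        self.compute = compute    # the capability: grows each cycle

    def self_modify(self):
        # Capability improves, but the utility function is copied
        # across unchanged: what the agent *wants* stays predictable.
        return SeedAgent(self.utility, self.compute * 2)

    def act(self, options):
        # What the agent *does* comes out of a search; in a real
        # system we couldn't second-guess it without matching the
        # agent's compute (here it's a trivial max() for brevity).
        return max(options, key=self.utility)

def count_clips(world):
    return world.count("clip")

agent = SeedAgent(count_clips)
for _ in range(10):
    agent = agent.self_modify()
assert agent.utility is count_clips   # goal invariant across upgrades

The point of the toy: you can verify the goal by inspection, and
show it survives self_modify(), even though predicting act() in
general would mean out-thinking the agent.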
> Beyond the singularity conditions is unknowable territory
> (thus the name), and preceding the Singularity are competing
> groups of human beings with different goals and ideas of
> friendliness.
The whole idea of Eliezer's CV proposal is to produce an end
result that is effectively the best compromise that everyone
would agree on, if everyone were vastly more intelligent. This
may or may not actually work, but it's worth trying as the
'best possible' answer to the 'whose FAI do we implement?'
question. Failing that, the question comes down to the judgement
of whoever actually builds the first seed AI, so I hope whoever
it is manages to instantiate a world not too disagreeable to us.
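To make 'best compromise' slightly less hand-wavy, here's a toy
Python sketch. It is emphatically NOT Eliezer's algorithm: it
assumes each person's extrapolated preferences somehow arrive as
numeric scores (a huge simplification), then applies maximin, a
standard social-choice rule I'm swapping in just to show what
'the outcome everyone can best live with' could mean mechanically:

def best_compromise(outcomes, extrapolated_scores):
    # extrapolated_scores: {person: {outcome: score}}, where the
    # scores are what each person *would* assign if vastly smarter.
    return max(outcomes,
               key=lambda o: min(s[o] for s in extrapolated_scores.values()))

scores = {"alice": {"A": 9, "B": 6, "C": 1},
          "bob":   {"A": 2, "B": 7, "C": 9}}
print(best_compromise(["A", "B", "C"], scores))   # -> 'B', the compromise

The real proposal leaves both the extrapolation and the notion of
agreement to the AI itself; the sketch only pins down the shape
of the question being asked.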
* Michael Wilson