Re: Singularity Institute: Likely to win the race to build GAI?

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Tue Feb 14 2006 - 19:26:27 MST


On Tuesday 14 February 2006 05:23 am, Joshua Fox wrote:
> Yes, I know that they are working on _Friendly_ GAI. But my question is:
> What reason is there to think that the Institute has any real chance of
> winning the race to General Artificial Intelligence of any sort, beating
> out those thousands of very smart GAI researchers?

Their chances are better if they try than if they don't. The possible
benefits if they succeed are quite large. The possible costs if the first AI
out of the bottle is unfriendly are quite high.

If you want to bet on another group to produce a friendly AI, go for it. The
chances of any one group are quite small. (In my estimation, the actual
initial AI will be unintentional and rather task focused, i.e., it will not
have instincts that cause it to attempt to take over everything. But what if
I'm wrong? And if I'm right, what about the second one?)

We have the best chance of survival if the first AI is friendly and has
instincts that lead it to ensure that any dominant AI will also be friendly.

>
> Though it might be a very bad thing for nonFriendly GAI to emerge first,
> it seems to me by far more likely for someone else --there are a lot of
> smart people out there -- to beat the Institute to the goal of GAI. And
> if so, perhaps the Institute needs to put all its resources into
> researching and evangelizing Friendliness, then teaming up with the
> world's leading GAI researchers -- whether at MIT, Stanford, or wherever
> they are -- to add Friendliness to their development program.
>
> Joshua

No. Most of those working on AIs are working on task-focused AIs, and they
will not be susceptible to evangelizing. So while it's reasonable for the
Institute to devote some time and effort to "spreading the word", that amount
should be strictly limited. Most of the evangelizing should be left to those
who AREN'T working on building an AI.

>
> Thomas Buckner wrote:
> > --- Joshua Fox <joshua@joshuafox.com> wrote:
> >> The writings at intelligence.org have made quite an impression on me.
> >>
> >> Though I am no expert, it appears to me that the Institute is a thought
> >> leader in the definition and the creation of FAI.
> >>
> >> But let me ask: Why does the Institute believe that it has a reasonable
> >> chance of leading the world in the construction of a GAI?
> >
> > What SIAI in general and Eliezer in particular are focused on is not
> > merely making a GAI but making a Friendly one, that won't extinct,
> > enslave, stultify or otherwise ruin us. It's that simple. By analogy,
> > there are fifty groups trying to build a car, but who else is trying to
> > develop brakes, belts, and airbags?
> >
> > Tom Buckner
> >



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:29 MST