Re: Singularity Institute: Likely to win the race to build GAI?

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Feb 14 2006 - 17:37:06 MST


Hi,

About Novamente and "hard takeoff"

I find the hard vs. soft takeoff terminology somewhat imprecise. Or, at
least, it may be precise as used by some individuals, but others use it
imprecisely, and even among those who use it precisely there is not much
consistency in the meaning.

I think that a hard takeoff could occur with a Novamente system -- but only
once the system had achieved a certain, clearly recognizable point of
maturity.

There will be a period of "soft", gradual cognitive development as we raise
the system through its virtual toddlerhood and childhood. At this stage it
will not have strong self-modifying capabilities, nor enough understanding
of computing, mathematics and cognitive science to usefully revise its own
code. Hard takeoff in this phase is extremely unlikely.

Then (assuming we are comfortable with the related, difficult Friendliness
issues -- but that's another topic, which I won't digress into in this
message) there will be a point at which, once the system understands enough
of the supporting science, we allow it to modify its own source code. After
this point a hard takeoff is possible at any time... though certainly not
guaranteed. We really don't have enough knowledge right now to say how fast
progress would be at such a stage.
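To make this concrete, here is a minimal toy sketch in Python of the kind of
gate I'm describing (the names are purely hypothetical illustrations, not
actual Novamente code): self-modification stays switched off until the system
demonstrably meets explicit maturity criteria.

# Toy sketch only -- hypothetical names, not Novamente internals.
from dataclasses import dataclass

@dataclass
class MaturityReport:
    """Competencies to demonstrate before source access is granted."""
    understands_own_codebase: bool        # could it usefully revise its own code?
    understands_supporting_science: bool  # computing, math, cognitive science
    passed_friendliness_review: bool      # the separate, harder question

def self_modification_allowed(report: MaturityReport) -> bool:
    """The gate: every criterion must hold before the switch is flipped."""
    return (report.understands_own_codebase
            and report.understands_supporting_science
            and report.passed_friendliness_review)

# During virtual toddlerhood/childhood the gate stays shut:
toddler = MaturityReport(False, False, False)
assert not self_modification_allowed(toddler)

The point is just that "hard takeoff becomes possible" is a transition the
developers deliberately trigger, not something that could sneak up on us
during the childhood phase.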

-- Ben G

On 2/14/06, pdugan <pdugan@vt.edu> wrote:
>
> Well I'd say it's worth evaluating the prospective Friendliness of these
> systems, for the obvious reasons. This is probably fairly difficult to do,
> particularly for projects based on proprietary information. I think a useful
> heuristic when gauging the risks associated with an AGI is to evaluate the
> likelihood of a hard take-off. From what I gather about Novamente, you seem
> to see a soft take-off as much more likely. If Novamente does prove robust
> enough to be deemed a "general intelligence", would it be possible for someone
> else, possibly SIAI, to conceive of a more "powerful" system that engages in
> a hard take-off while Novamente spends its "childhood"? Or, on the other hand,
> what sort of Friendliness constraints does Novamente possess?
>
> Patrick
>
> >===== Original Message From ben@goertzel.org =====
> >In fact I know of a number of individuals/groups in addition to myself
> >who fall into this category (significant progress made toward
> >realizing a software implementation whose design has apparent AGI
> >potential), though I'm not sure which of them are list members.
> >
> >In addition to my Novamente project (www.novamente.net), I would
> >mention Steve Omohundro
> >
> >http://home.att.net/~om3/selfawaresystems.html
> >
> >(who is working on a self-modifying AI system using his own variant of
> >Bayesian learning) and James Rogers with his
> >algorithmic-information-theory related AGI design (James is a list
> >member, but his work has been kept sufficiently proprietary that I
> >can't say much about it). There are many others as well...
> >
> >Based on crude considerations, it would seem SIAI is nowhere near the
> >most advanced group on the path toward an AGI implementation. On the
> >other hand, it's of course possible that those of us who are "further
> >along" all have wrong ideas (though I doubt it!) and SIAI will come up
> >with the right idea in 2008 or whenever and then proceed rapidly
> >toward the end goal.
> >
> >ben
> >
> >On 2/14/06, pdugan <pdugan@vt.edu> wrote:
> >> There is a certain list member who already has an AGI model more than half
> >> implemented, making it a few years from testability to see if it classifies
> >> as a genuine AGI, and if so then maybe another half a decade before something
> >> like recursive self-improvement becomes possible.
> >>
> >> Patrick
> >>
> >> >===== Original Message From P K <kpete1@hotmail.com> =====
> >> >>Yes, I know that they are working on _Friendly_ GAI. But my question is:
> >> >>What reason is there to think that the Institute has any real chance of
> >> >>winning the race to General Artificial Intelligence of any sort, beating
> >> >>out those thousands of very smart GAI researchers?
> >> >>
> >> >There is no particular reason I can think of that makes the Institute more
> >> >likely to develop AGI than any other organization with skilled developers.
> >> >It's all a fog. The only way to see if their ideas have any merit is to try
> >> >them out. Also, I suspect their donations would increase if they showed some
> >> >proofs of concept. It's all speculative at this point.
> >> >
> >> >As for predicting success or failure, the best calibrated answer is to
> >> >predict failure for anyone attempting to build a GAI. You would be right most
> >> >of the time and wrong probably only once, or right all the time (oh dear,
> >> >heresy).
> >> >
> >> >That doesn't mean it isn't worth trying. By analogy, think of AGI developers
> >> >as individual sperm trying to reach the egg. The odds of any individual are
> >> >incredibly small, but the reward is so good it would be a shame not to try.
> >> >Also, FAI has to be developed only once for all to benefit.
> >> >
>
>
>



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT