From: pdugan (pdugan@vt.edu)
Date: Tue Feb 14 2006 - 16:25:04 MST
Well, I'd say it's worth evaluating the prospective Friendliness of these
systems, for the obvious reasons. This is probably fairly difficult to do,
particularly for projects based on proprietary information. I think a useful
heuristic when gauging the risks associated with an AGI is to evaluate the
likelihood of a hard take-off. From what I gather about Novamente, you seem
to see a soft take-off as much more likely. If Novamente does prove robust
enough to be deemed a "general intelligence," would it be possible for someone
else, possibly SIAI, to conceive of a more "powerful" system that engages in a
hard take-off while Novamente spends its "childhood"? Or, on the other hand,
what sort of Friendliness constraints does Novamente possess?
Patrick
>===== Original Message From ben@goertzel.org =====
>In fact I know of a number of individuals/groups in addition to myself
>who fall into this category (significant progress made toward
>realizing a software implementation whose design has apparent AGI
>potential), though I'm not sure which of them are list members.
>
>In addition to my Novamente project (www.novamente.net), I would
>mention Steve Omohundro
>
>http://home.att.net/~om3/selfawaresystems.html
>
>(who is working on a self-modifying AI system using his own variant of
>Bayesian learning) and James Rogers with his
>algorithmic-information-theory related AGI design (James is a list
>member, but his work has been kept sufficiently proprietary that I
>can't say much about it). There are many others as well...
>
>Based on crude considerations, it would seem SIAI is nowhere near the
>most advanced group on the path toward an AGI implementation. On the
>other hand, it's of course possible that those of us who are "further
>along" all have wrong ideas (though I doubt it!) and SIAI will come up
>with the right idea in 2008 or whenever and then proceed rapidly
>toward the end goal.
>
>ben
>
>On 2/14/06, pdugan <pdugan@vt.edu> wrote:
>> There is a certain list member who already has an AGI model more than half
>> implemented, making it a few years from testability to see if it classifies
>> as a genuine AGI, and if so then maybe another half a decade before something
>> like recursive self-improvement becomes possible.
>>
>> Patrick
>>
>> >===== Original Message From P K <kpete1@hotmail.com> =====
>> >>Yes, I know that they are working on _Friendly_ GAI. But my question is:
>> >>What reason is there to think that the Institute has any real chance of
>> >>winning the race to General Artificial Intelligence of any sort, beating
>> >>out those thousands of very smart GAI researchers?
>> >>
>> >There is no particular reason I can think of that makes the Institute more
>> >likely to develop AGI than any other organization with skilled developers.
>> >It's all a fog. The only way to see if their ideas have any merit is to try
>> >them out. Also, I suspect their donations would increase if they showed some
>> >proofs of concept. It's all speculative at this point.
>> >
>> >As for predicting success or failure, the best calibrated answer is to
>> >predict failure for anyone attempting to build a GAI. You would be right most
>> >of the time and wrong probably only once, or right all the time (oh dear,
>> >heresy).
>> >
>> >That doesn't mean it isn't worth trying. By analogy, think of AGI developers
>> >as individual sperm trying to reach the egg. The odds of any individual are
>> >incredibly small, but the reward is so good it would be a shame not to try.
>> >Also, FAI has to be developed only once for all to benefit.
>> >