From: Olie L (neomorphy@hotmail.com)
Date: Wed Feb 15 2006 - 03:43:57 MST
>From: "Tyler Emerson" <emerson@intelligence.org>
>To: "'pdugan'" <pdugan@vt.edu>, "'Olie L'" <neomorphy@hotmail.com>
>Subject: RE: Singularity Institute: Likely to win the race to build GAI?
>Date: Tue, 14 Feb 2006 19:57:36 -0800
>
>Our chief goal is to create Friendly AI through our own project. Acquiring
>the funding and researchers to sustain an eight- to ten-person team is
>challenging but sufficiently achievable. I am not against influencing other
>projects, but that is much less optimal, based on my present assessment. I
>don't see enough appreciation from other projects of how *hard* it will be
>to achieve Friendly AI, and how critical it is to have a mathematical
>understanding of Friendly AI *before* building AGI. I don't know why this
>is so hard for most projects to understand. Based on my present
>understanding, AGI projects are playing with fire the likes of which the
>world has never seen, and I haven't seen a sufficient appreciation of this.
>The Institute must find and develop brilliant researchers one individual at
>a time. We're presently looking for our second full-time Research Fellow.
>If and when we find that person, we'll be in a stronger position to find
>the 3rd and 4th.
>
>TE
>
> > -----Original Message-----
> > From: pdugan [mailto:pdugan@vt.edu]
> > Sent: Tuesday, February 14, 2006 7:31 PM
> > To: emerson; Olie L
> > Cc: pdugan@vt.edu
> > Subject: RE: Singularity Institute: Likely to win the race to build GAI?
> >
 > > I think you have a good idea of what SIAI's role could be, though I
 > > suppose Tyler should corroborate (I'm only affiliated as a volunteer and
 > > not much of an authority). I remember Goertzel saying something about
 > > Eliezer's writings, how they made him take the Friendliness problem more
 > > seriously. I think the Institute could operate in the mediatory way you
 > > describe without requiring the fifty-million-dollar budget needed to
 > > build an AGI themselves. I think it would be most beneficial for SIAI to
 > > gear itself toward fostering an underlying forum for ensuring
 > > Friendliness in the AGI community.
> >
 > > Now, though this should definitely be a role the Institute plays, I
 > > can't say whether it would be a primary or secondary role; Eliezer seems
 > > very committed to engineering an AGI himself.
> >
> > Patrick
> >
> > >===== Original Message From Olie L <neomorphy@hotmail.com> =====
> > >Hi Patrick, Tyler
> > >
 > > >I'd like to bounce this off you, first - Could you check and verify
 > > >that I'm not just reiterating lame info or stepping over any lines of
 > > >appropriateness? Thankye...
> > >
> > >As I see it, the SIAI's stated goals of
> > >
> > > the "advancement of beneficial artificial intelligence and ethical
> > >cognitive enhancement"
> > >
 > > >can be perfectly well achieved by having other institutions "win the
 > > >race" to AGI.
> > >
 > > >Their role is not necessarily to be the most advanced group on the path
 > > >toward an AGI implementation. Their role - as I see it - is to work
 > > >towards the creation of beneficial AI. That is different from creating
 > > >beneficial AI themselves.
> > >
 > > >Many other NGOs have created powerful positions for themselves, where
 > > >they can work with commercial institutions to achieve their stated
 > > >goals. A good example is the RSPCA (Royal Soc. for Prevention of
 > > >Cruelty to Animals) - which has become (1) a de-facto enforcer of
 > > >government legislation (2) a powerful lobby for creating government
 > > >legislation (3) an operation that directly provides shelter to some
 > > >animals (4) an organisation that works collaboratively with many
 > > >businesses.
> > >
 > > >Not only do businesses provide the RSPCA with resources and money, they
 > > >also engage in joint projects and give them unusual access to
 > > >commercially sensitive information.
> > >
 > > >As a non-profit organisation, the Institute may often be given far more
 > > >access to proprietary information than other groups - such as
 > > >universities or even investors. Such access relies on the Institute
 > > >developing an appropriate reputation, including having the right skills
 > > >to be able to provide consultative services.
> > >
> > >No commercial AGI project would want to create an unfriendly AI. It is
> > >against their interests to do so.
> > >
 > > >If a business believes that the Institute can provide a service - such
 > > >as improving the friendliness of the business's AI project - there is a
 > > >strong incentive to work with the Institute, advancing the Institute's
 > > >goals.
> > >
 > > >As far as I can see, by carving out a niche for itself - FAI theory -
 > > >the Institute has already done much to advance its reputation. Even if
 > > >its own projects are not the furthest towards demonstrating AGI
 > > >potential, any efforts will hopefully assist in improving the
 > > >Institute's FAI expertise. If there are demonstrable successes, these
 > > >will also greatly advance the Institute's reputation.
> > >
 > > >I would like to see the Institute expand its capacity to provide
 > > >consultative services. This is only my opinion. But it has already had
 > > >substantial influence on a number of projects other than those of its
 > > >staff. Let us hope that more AGI projects will take its advice.
> > >
> > >-- Olie
> > >
> > >
> > >
> > >>From: pdugan <pdugan@vt.edu>
> > >>Reply-To: sl4@sl4.org
> > >>To: sl4 <sl4@sl4.org>
> > >>CC: pdugan@vt.edu
 > > >>Subject: RE: Singularity Institute: Likely to win the race to build
 > > >>GAI?
> > >>Date: Tue, 14 Feb 2006 18:25:04 -0500
> > >>
 > > >>Well, I'd say it's worth evaluating the prospective Friendliness of
 > > >>these systems, for the obvious reasons. This is probably fairly
 > > >>difficult to do, particularly for projects based on proprietary
 > > >>information. I think a useful heuristic when gauging the risks
 > > >>associated with an AGI is to evaluate the likelihood of a hard
 > > >>take-off. From what I gather about Novamente, you seem to see soft
 > > >>take-off as much more likely. If Novamente does prove robust enough
 > > >>to be deemed a "general intelligence", would it be possible for
 > > >>someone else, possibly SIAI, to conceive of a more "powerful" system
 > > >>that engages in hard take-off while Novamente spends its "childhood"?
 > > >>Or, on the other hand, what sort of Friendliness constraints does
 > > >>Novamente possess?
> > >>
> > >> Patrick
> > >>
> > >> >===== Original Message From ben@goertzel.org =====
 > > >> >In fact I know of a number of individuals/groups in addition to
 > > >> >myself who fall into this category (significant progress made toward
 > > >> >realizing a software implementation whose design has apparent AGI
 > > >> >potential), though I'm not sure which of them are list members.
> > >> >
> > >> >In addition to my Novamente project (www.novamente.net), I would
> > >> >mention Steve Omohundro
> > >> >
> > >> >http://home.att.net/~om3/selfawaresystems.html
> > >> >
 > > >> >(who is working on a self-modifying AI system using his own variant
 > > >> >of Bayesian learning) and James Rogers with his
> > >> >algorithmic-information-theory related AGI design (James is a list
> > >> >member, but his work has been kept sufficiently proprietary that I
> > >> >can't say much about it). There are many others as well...
> > >> >
 > > >> >Based on crude considerations, it would seem SIAI is nowhere near
 > > >> >the most advanced group on the path toward an AGI implementation.
 > > >> >On the other hand, it's of course possible that those of us who are
 > > >> >"further along" all have wrong ideas (though I doubt it!) and SIAI
 > > >> >will come up with the right idea in 2008 or whenever and then
 > > >> >proceed rapidly toward the end goal.
> > >> >
 > > >> >ben
> > >> >
> > >> >On 2/14/06, pdugan <pdugan@vt.edu> wrote:
 > > >> >> There is a certain list member who already has an AGI model more
 > > >> >> than half implemented, making it a few years from testability to
 > > >> >> see if it classifies as a genuine AGI, and if so then maybe
 > > >> >> another half a decade before something like recursive
 > > >> >> self-improvement becomes possible.
> > >> >>
> > >> >> Patrick
> > >> >>
> > >> >> >===== Original Message From P K <kpete1@hotmail.com> =====
 > > >> >> >>Yes, I know that they are working on _Friendly_ GAI. But my
 > > >> >> >>question is: What reason is there to think that the Institute
 > > >> >> >>has any real chance of winning the race to General Artificial
 > > >> >> >>Intelligence of any sort, beating out those thousands of very
 > > >> >> >>smart GAI researchers?
> > >> >> >>
 > > >> >> >There is no particular reason I can think of that makes the
 > > >> >> >Institute more likely to develop AGI than any other organization
 > > >> >> >with skilled developers. It's all a fog. The only way to see if
 > > >> >> >their ideas have any merit is to try them out. Also, I suspect
 > > >> >> >their donations would increase if they showed some proofs of
 > > >> >> >concept. It's all speculative at this point.
> > >> >> >
 > > >> >> >As for predicting success or failure, the best calibrated answer
 > > >> >> >is to predict failure for anyone attempting to build a GAI. You
 > > >> >> >would be right most of the time and wrong probably only once, or
 > > >> >> >right all the time (oh dear, heresy).
> > >> >> >
 > > >> >> >That doesn't mean it isn't worth trying. By analogy, think of
 > > >> >> >AGI developers as individual sperm trying to reach the egg. The
 > > >> >> >odds of any individual are incredibly small, but the reward is so
 > > >> >> >good it would be a shame not to try. Also, FAI has to be
 > > >> >> >developed only once for all to benefit.
> > >> >> >
> > >> >>
> > >> >>
> > >> >>
> > >>
> > >>
>
>
>
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:29 MST