From: Roko Mijic (rmijic@googlemail.com)
Date: Sat Feb 28 2009 - 15:57:16 MST
Thanks, Joshua.
I've been writing up a research proposal following a visit to IDSIA,
and the conclusions I have come to are:
(1) Previous work within academia has failed to deliver because of
various social factors related to the warping influence that the
prospect of AGI has on the human mind: overconfidence, giving up, etc.
(2) We are fairly fortunate that this is the case, because no previous
projects had a Friendliness strategy, so if they had worked we might
well all be dead...
(3) Today there are about 100 people in the world working on AGI in a
vaguely serious way, as far as I can tell, and we shouldn't hold our
breath for AGI any time soon unless much more good-quality research is
done. The number of implemented systems is, well, scraping the bottom
of the barrel here, about four that I can think of (Novamente, NARS,
SOAR, ACT-R). For comparison, in my day-job CS research there are
about 50 implemented ontology-mapping systems...
(4) This might be a good thing, because most of the people working in
AGI *STILL* don't have friendliness as a priority. The number of
people working on friendly AI (which I would define loosely as AGI
with human friendliness as an overriding consideration) in the world
is probably fewer than 20.
(5) It can't go on like this forever. At some point, we are going to
have to make things ship-shape in AGI, which means taking AGI from
email lists back into CS departments, and taking friendliness very
seriously. Some people (Vassar!) seem to think that the 20-or-so can
do the job on their own, and I doubt this. But on the other hand,
convincing mainstream academia to take FAI seriously is a very hard
problem too, though it brings with it a hefty advantage: academia has
lots of good people and lots of money.
Given the situation at the moment, I think that we are in for a long
fight if we are to get the high-quality community, the level of
funding and manpower, and the required farsightedness to bring about
an FAI. At some point, Eliezer made a comment on OB saying that the
people down in the AGI "dungeon" today weren't down there because they
were really good and wanted a really tough problem to attack, but
because they were too stupid or too overconfident to see how hard the
problem actually is. Yes, I agree. It is a shame that this, the most
important scientific problem ever, is getting so little attention,
whilst talent and funding are thrown into black holes like ... black
hole research.
2009/2/28 Joshua Fox <joshua@joshuafox.com>:
> There has been little research into the theory of intelligence-in-general
> (non-anthropomorphic general intelligence) and recursive decision theory. (I
> know of the work of SIAI affiliates and Schmidhuber-Hutter-Legg. If
> there's more, I'd appreciate a bibliography.)
> Adding an incremental contribution to the limited existing work (which is
> how most science is done) would be valuable in its own right and as a way
> of raising the profile of this area in academia. Depending on how far you
> go, this would not be revealing secrets, although I suppose that just
> increasing the size of the field could be a risk.
> If done right, the research could be connected to some existing field, at
> least to the point where publication is possible.
> Joshua
>
> On Mon, Feb 23, 2009 at 12:20 AM, Roko Mijic <rmijic@googlemail.com> wrote:
>>
>> Since I've been lurking in the h+/AGI community for a while without
>> reading SL4, I'd like to know what the general opinion of this
>> community is on FAI development within, or in collaboration with, the
>> mainstream of academia.
>>
>> Now, the current situation is that there is at least a conference on
>> general intelligence, and a very small community of researchers
>> working on general AI.
>>
>> One way to hasten the development of FAI is for me to seek to do
>> research within academia. A disadvantage of this strategy is that
>> academia is an open community, and anyone can potentially look at the
>> results that the field is producing and use them to create uFAI.
>> Eliezer has outlined some other problems with academia in the
>> following SL4 post:
>>
>> http://www.sl4.org/archive/0410/10071.html
>>
>> Another possibility is for SIAI to seek to keep the most important
>> aspects of AGI development mostly secret.
>>
>> Is SIAI adopting this mode of operation (i.e. internal research)?
>>
>> This has the disadvantage that a small community of researchers will
>> be less creative and more susceptible to groupthink than the entire
>> international research community. "Open innovation" vs. "closed
>> innovation" comes to mind here:
>>
>> http://en.wikipedia.org/wiki/Open_innovation
>>
>> Now, I'm at a stage where I need to decide what to do with my life,
>> so a bit of advice on this would be appreciated. Perhaps the list has
>> already discussed similar issues ("I want to help out with FAI
>> research, what do I do?", etc.).
>>
>> Best,
>>
>> Roko
>>
>> --
>> Roko Mijic
>>
>> MSc by Research
>>
>> University of Edinburgh
>>
>
>
--
Roko Mijic
MSc by Research
University of Edinburgh