[sl4] Re: studying at the School of Informatics in University of Edinburgh

From: Roko Mijic (rmijic@googlemail.com)
Date: Fri Mar 13 2009 - 08:59:44 MDT

2009/3/13 Rui Costa <racosta@student.dei.uc.pt>

> I agree with you, and I think that the good side greatly outweighs the bad
> one.

Well, it's a weighty matter. Doing AGI/FAI research is undoubtedly a risk to
your scientific career. If you like a quiet life... then it probably isn't
worth it. If, like me, you have always been an iconoclast, and always wanted
to question things, to find out what is really important in science, then it
is the perfect challenge!

> I also want to do research in AI/AGI with a strong brain-science basis.
> Therefore I believe that one of the best options at this point in my career
> is to enter a doctoral program in neuroinformatics and computational
> neuroscience (like the one in Edinburgh). With this background I will be
> prepared to do research on neuroscience and artificial intelligence
> in projects that attempt to create true intelligence.

Sure, you could do this. My opinion is that learning neuroscience is a waste
of your time. You'd be better off learning the functional input/output
behavior of the human brain (this is *cognitive science*) than learning
about the internal wiring. Read Minsky's *The Emotion Machine* to find out
more about this approach.

Another great resource is Josh Tenenbaum on hierarchical Bayesian models for
structure learning, which he relates to the learning behavior of infants.
Again, this is cognitive science, not neuroscience.

A third source of inspiration is Ben Goertzel's general theory of general
AI, and Shane Legg and Marcus Hutter's agent/environment interaction
framework.
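To give a flavor of the agent/environment framework: an agent and an environment exchange actions, observations, and rewards in a loop, and intelligence is measured by the reward an agent accumulates across many environments. A minimal sketch of that loop in Python (all class and function names here are illustrative, not taken from Legg and Hutter's papers; the toy environment and agent are my own assumptions):

```python
import random

class Environment:
    """Toy environment: rewards the agent for echoing the last observation."""
    def __init__(self):
        self.last_obs = 0  # the observation most recently sent to the agent

    def step(self, action):
        # Reward 1 if the agent's action matches the last observation it saw.
        reward = 1.0 if action == self.last_obs else 0.0
        self.last_obs = random.choice([0, 1])  # emit a new observation
        return self.last_obs, reward

class Agent:
    """Toy agent: simply echoes back whatever it just observed."""
    def act(self, observation):
        return observation

def interact(agent, env, steps=100):
    """Run the agent/environment loop and return cumulative reward."""
    total_reward = 0.0
    obs = 0  # initial observation, matching the environment's initial state
    for _ in range(steps):
        action = agent.act(obs)
        obs, reward = env.step(action)
        total_reward += reward
    return total_reward
```

In the full framework the agent's intelligence is its expected cumulative reward averaged over a weighted class of environments; this sketch shows only a single episode of the underlying interaction protocol.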

Others will probably disagree with me...

> You said that you will do research in AI/AGI, but more specifically, where
> and on what?
> Best regards,
> Rui P. Costa
> On Fri, Mar 13, 2009 at 1:57 PM, Roko Mijic <rmijic@googlemail.com> wrote:
>> I'm still not certain, but I think that AI/AGI research is the most
>> fulfilling thing for me to do with the next 6 years of my life.
>> Do remember that AGI research is a very hard scientific problem. This is
>> both a good thing and a bad thing: it is good because it is a very
>> interesting opponent to fight against; even if you fail, you will have
>> "lived" more than if you'd succeeded at some boring, run-of-the-mill piece
>> of almost-settled-science.
>> It is a bad thing because you will fail a lot, and this will be
>> disheartening. Also, it has a poor intellectual reputation at the moment,
>> though this is improving.
>> Lastly, there is the issue of impact upon the future of humanity. This
>> again is a double-edged sword. It is good because you get to feel you are
>> doing something really important, and if you are part of an effort that
>> succeeds in creating a positive singularity, not only will you live forever
>> in a very nice world, but you will also be a hero for the rest of the age of
>> the universe, a kind of eternal celebrity. It is bad because the human mind
>> (at least my mind) finds it hard to cope with the immense cognitive
>> dissonance that is created by this weight of responsibility, and the
>> implication that there is a significant chance that the human race will be
>> wiped out by someone's uFAI project. Also, merely contemplating the size of
>> the stakes (both the reward for success, and the penalty for failure) makes
>> you think that you are insane. I have found existentialist philosophy to be
>> helpful in this respect: humans must strive to create meaning in their
>> meaningless universe, and pursuing a mad-sounding but potentially
>> universe-saving idea *does* create meaning, even if the idea really is mad.
>> cc: SL4 list, because others may want to read this advice, and/or comment
>> 2009/3/13 Rui Costa <racosta@student.dei.uc.pt>
>>> Hi,
>>> So what are your plans for the future?
>>> Do you want to do research in AGI?
>>> Best regards,
>>> Rui

Roko Mijic
MSc by Research
University of Edinburgh

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT