From: Joseph Henry (josephjah@gmail.com)
Date: Fri Mar 13 2009 - 22:34:16 MDT
On Fri, Mar 13, 2009 at 5:57 AM, Roko Mijic <rmijic@googlemail.com> wrote:
> I'm still not certain, but I think that AI/AGI research is the most
> fulfilling thing for me to do with the next 6 years of my life.
>
> Do remember that AGI research is a very hard scientific problem. This is
> both a good thing and a bad thing: it is good because it is a very
> interesting opponent to fight against; even if you fail, you will have
> "lived" more than if you'd succeeded at some boring, run of the mill piece
> of almost-settled-science.
>
> It is a bad thing because you will fail a lot, and this will be
> disheartening. Also, it has a poor intellectual reputation at the moment,
> though this is improving.
>
> Lastly, there is the issue of impact upon the future of humanity. This
> again is a double-edged sword. It is good because you get to feel you are
> doing something really important, and if you are part of an effort that
> succeeds in creating a positive singularity, not only will you live forever
> in a very nice world, but you will also be a hero for the rest of the age of
> the universe, a kind of eternal celebrity. It is bad because the human mind
> (at least my mind) finds it hard to cope with the immense cognitive
> dissonance that is created by this weight of responsibility, and the
> implication that there is a significant chance that the human race will be
> wiped out by someone's uFAI project. Also, merely contemplating the size of
> the stakes (both the reward for success, and the penalty for failure) makes
> you think that you are insane. I have found existentialist philosophy to be
> helpful in this respect: humans must strive to create meaning in their
> meaningless universe, and pursuing a mad-sounding but potentially
> universe-saving idea *does* create meaning, even if the idea really is mad.
>
> cc: SL4 list, because others may want to read this advice, and/or comment
>
It does have a poor intellectual reputation, but to offer a small piece of
advice (coming from a budding undergraduate): break the AGI problem into
smaller, narrow-AI-ish pieces, then attack them one by one, or all slowly in
unison under the guise of more practical applications. I am currently
building a pattern-recognition engine to predict stock prices for an
independent study at my university, but my real motive is to take the end
product and chuck it into my metaphorical bin of AGI parts.
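To make that concrete, here is a minimal sketch of the kind of narrow-AI
piece I have in mind: an autoregressive predictor that learns the next
price as a linear function of a sliding window of recent prices. The
synthetic data, window size, and function names are all made up for
illustration; a real engine would use richer features and proper
out-of-sample validation.

# Toy autoregressive price predictor (illustrative sketch only).
import numpy as np

def make_windows(prices, window=5):
    """Turn a price series into (window -> next price) training pairs."""
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = np.array(prices[window:])
    return X, y

def fit_linear_predictor(X, y):
    """Least-squares fit of the next price as a linear function of the window."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def predict_next(coef, recent_prices):
    """Predict the next price from the most recent window."""
    return float(np.append(recent_prices, 1.0) @ coef)

if __name__ == "__main__":
    np.random.seed(0)  # synthetic noisy upward trend, not real market data
    prices = np.cumsum(np.random.normal(0.1, 1.0, 200)) + 100.0
    X, y = make_windows(prices, window=5)
    coef = fit_linear_predictor(X, y)
    print("predicted next price:", predict_next(coef, prices[-5:]))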
As for the impact on humanity and the universe (as if they are separate
entities), I totally agree: imagining the future impact is mind-bendingly
bizarre and difficult at the same time, and it's a lot of responsibility.