From: Roko Mijic (rmijic@googlemail.com)
Date: Fri Mar 13 2009 - 10:30:04 MDT
Sent from my iPhone
On 13 Mar 2009, at 15:42, tim@fungible.com (Tim Freeman) wrote:
> From: Roko Mijic <rmijic@googlemail.com>
>> I'm still not certain, but I think that AI/AGI research is the most
>> fulfilling thing for me to do with the next 6 years of my life.
> ...
>> Lastly, there is the issue of impact upon the future of humanity. ...
>> It is bad because the human mind (at least my mind) finds it hard to
>> cope with the immense cognitive dissonance that is created by this
>> weight of responsibility, and the implication that there is a
>> significant chance that the human race will be wiped out by someone's
>> uFAI project. Also, merely contemplating the size of the stakes (both
>> the reward for success, and the penalty for failure) makes you think
>> that you are insane.
>
> I've spent the last few years mostly doing things entirely unrelated
> to AGI research. In my experience, once you're aware of the issues,
> you'll pay the costs listed above whether you do constructive work on
> the issues or not. So they aren't really costs of working on the
> issues unless the alternative is to completely fill your attention
> with meaningless distractions so you don't think about what's really
> happening. Apparently I'm not up to that. I just can't party hard
> enough.
Haha... I'll have to invite you to one of my parties up here in
Edinburgh ...
In all seriousness, yes, I agree. There is a strong moral imperative
here that no intelligent non-psychopath can ignore. It's tough to turn
that overwhelming imperative into a motivation that doesn't drive you
crazy, though.
>
> So I think you're doing the right thing.
>
> Let me know if you figure out how to get paid.
> --
> Tim Freeman http://www.fungible.com tim@fungible.com