From: Roko Mijic (email@example.com)
Date: Fri Mar 13 2009 - 20:03:49 MDT
2009/3/14 Matt Mahoney <firstname.lastname@example.org>
> > Lastly, there is the issue of impact upon the future of humanity. ...
> > It is bad because the human mind (at least my mind) finds it hard to
> > cope with the immense cognitive dissonance that is created by this
> > weight of responsibility, and the implication that there is a
> > significant chance that the human race will be wiped out by someone's
> > uFAI project.
> 1. AGI won't be developed in isolation. It will be an internet that keeps
> getting smarter, on which it becomes harder to know when you are talking to
> a human.
> 2. Everyone, including criminals, will have access to AGI, just like
> everyone has access to email and Google.
> ...unless someone can show me a model of self-improving intelligence in a
> box rushing past the collective intelligence of humanity, I think the focus
> of our attention should be the collective intelligence of humanity.
Well, I could respond "unless you can show me a model of the internet acting
as an agent rather than a large repository of data, I think the focus should
be self-improving intelligence".
But I like to keep an open mind. You are not the only one who has rejected
the geographical locality assumption. Maybe you are correct.
If you are correct, what should we do? Probably not much in particular;
simply look forward to a world that becomes steadily more intelligent over
time. Perhaps one would recommend strengthening the democratic process and
the rigor of public policy debate.
> -- Matt Mahoney, email@example.com
-- Roko Mijic MSc by Research University of Edinburgh
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:19 MDT