From: Philip Goetz (philgoetz@gmail.com)
Date: Mon Apr 10 2006 - 08:28:38 MDT
That's a good idea. I am guessing that many people on this list would
say that an AI would take off too quickly for this to be of any use. I
don't think we can know that.
An effective response, however, would probably require coercive,
invasive government authority, e.g., the right to search homes and
computers without a warrant and to take or destroy whatever is found
there. At this time, I would be more afraid of an authority set up as
an AI first-response CERT than of an AI itself. I don't want to give
the US govt an excuse to expand unconstitutional domestic spying.
On 3/23/06, H C <lphege@hotmail.com> wrote:
> Or something.
>
> This was an idea that occurred to me a while back and just came to me again.
> As we know, there are numerous private, commercial, and academic general
> intelligence development projects going on. I think it is a duty of the
> Singularity Institute to offer some kind of "First response" system for a
> potential Singularity take-off. That is, I think that if a project
> independent of SingInst developed an artificial general intelligence, it
> would definitely be in the interest of humanity for such (true) claims to be
> addressed by a superior panel of Friendliness experts (or our best
> approximation thereof).
>
> The probability of such a system being of any utility is low, I would
> suppose, but I think it would be ridiculous not to set something like this
> up, and to actually advertise the response team's credibility, objectivity,
> confidentiality, and so on.
>
> It might just give humanity the slight edge necessary for surviving a
> potential apocalypse (BUT... probably not). I think this is important,
> though. What does everyone else think?
>
> -hank