Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

From: BillK (pharos@gmail.com)
Date: Wed Jun 07 2006 - 02:08:08 MDT


On 6/7/06, Eliezer S. Yudkowsky wrote:
<snip>
>
> Because this is a young field, how much mileage you get out will be
> determined in large part by how much sweat you put in. That's the
> simple practical truth. The reasons why you do X are irrelevant given
> that you do X; they're "screened off", in Pearl's terminology. It
> doesn't matter how good your excuse is for putting off work on Friendly
> AI, or for not building emergency shutdown features, given that that's
> what you actually do. And this is the complaint of IT security
> professionals the world over; that people would rather not think about
> IT security, that they would rather do the minimum possible and just get
> it over with and go back to their day jobs. Who can blame them for such
> human frailty? But the result is poor IT security.
>

This is the real world you have to deal with.
You cannot get the funding, or the time, to do the job properly,
because there is always pressure to be first to market.

AGI is such a tricky problem that just getting it to work at all is
regarded as a minor miracle (like the early days of computers and
the internet).
The attitude is: implement first, we can always patch it afterwards.

A much more worrying consideration, of course, is that the people with
the most resources, DARPA (and the Chinese), want an AGI to help them
kill their enemies. For defensive reasons only, naturally.

When AGI is being developed as a weapon with massive government
resources, AGI ethics and Friendliness don't even get into the
specification.
Following orders from the human owners does.

BillK


