Re: De-Anthropomorphizing SL3 to SL4.

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Wed Mar 17 2004 - 17:20:31 MST


Michael Anissimov <michael@acceleratingfuture.com> wrote:
> I still think that the vast majority of people who consider the
> ethics of advanced AI are worried about one of two things:
>
> 1) anthropomorphic goals emerging spontaneously within the AI
> 2) mechanomorphic (like a gun) or anthropomorphic (like a slave)
> exploitation of AIs by human agents
>
> When the real problem (as many people on this list know) is
>
> 3) abstract failures of Friendliness within a very foreign and
> difficult-to-imagine goal system structure

I have read elsewhere the view that violation of free will would qualify as a failure of Friendliness; i.e., that the AI should not force people to do things they do not wish to do (or prevent them from doing what they do wish to do). But it will have no choice! Many humans have goals which are self- or other-destructive, and some even wish to destroy everything (Hitler, at the very end, wished for a bomb that could destroy the entire world). To avoid coercion the AI would have to lapse into inertia, which I doubt it would choose.

Even if the FAI is downright maternal toward us, there will be coercion, though it may be disguised by a 'bubble reality' wherein we think we are doing as we please, without being able to affect anyone or anything outside our own minds (some days I suspect this is already going on).

Tom Buckner

 



