Re: Flawed Risk Analysis (was Re: SIAI's flawed friendliness analysis)

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Fri May 23 2003 - 14:45:13 MDT


Bill,

You wrote:

> We don't have to follow an AI's detailed thoughts. The
> inspection is of the design, not the changing contents
> of its mind. If its initial reinforcement values are
> for human happiness, and its simulation and reinforcement
> learning algorithms are accurate, then we can trust the
> way it will develop. [snip]

Inspecting the design might work (with the caveats well elucidated in
James Rogers' earlier post) if the design relies on reinforcement
values. But many designs won't rely on such things, or at least not
exclusively.
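
To make the distinction concrete, here is a rough toy sketch (plain
Python, purely illustrative - the states, reward function, and world
dynamics are all made up) of what 'inspecting the design' buys you in a
reinforcement-value agent: the reward function and learning rule are
fixed and inspectable, but the values the agent actually learns depend
entirely on the experience it happens to accumulate.

    import random

    # Toy only: the 'design' is the fixed reward function plus the
    # learning rule; the 'contents of its mind' are the learned values,
    # which depend on whatever the agent happens to experience.

    def reward(state):
        # Designer-specified reinforcement value, inspectable at design
        # time. A stand-in for 'human happiness': 'smiling' is rewarded.
        return 1.0 if state == "smiling" else 0.0

    states = ["smiling", "frowning"]
    actions = ["tell_joke", "do_nothing"]

    def step(state, action):
        # Made-up world dynamics that the designer does not control.
        if action == "tell_joke":
            return random.choice(["smiling", "smiling", "frowning"])
        return random.choice(states)

    # The learned action values: the part you cannot read off the design.
    q = {(s, a): 0.0 for s in states for a in actions}
    alpha = 0.1

    state = "frowning"
    for _ in range(10000):
        action = random.choice(actions)      # crude exploratory policy
        next_state = step(state, action)
        r = reward(next_state)
        # One-step reinforcement update toward the observed reward.
        q[(state, action)] += alpha * (r - q[(state, action)])
        state = next_state

    print(q)

The point of the toy is only that the learned values are a product of
the agent's history, not something you can read off the reward function
alone.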

Additionally, inspecting the design alone of a general intelligence (GI)
will always (always!) yield an open-ended answer as to how it will
develop. That is what a GI is, and that is what it does: it develops
in an open-ended way.

Furthermore, I would not trust a GI based on 'reinforcement values [...]
for human happiness' for one nanosecond! If it is not more intelligent
than my own, humanly evolved happiness program, then it is not worthy
of my trust. I am already smart enough to know that the person *in
charge* should be the smartest and best-informed person in the room. I
don't see that 'person' as having the reinforced goal of 'human
happiness' - not now, and not in the future. Sure, that goal might work
well every once in a while... but not in all situations.

I have refrained from replying to most of your recent posts, not because
I agreed with them, but because there was so much that I disagreed
with - and the points of disagreement were so basic - that I saw no
point in discussing them with you. No offence, Bill, but we seem to
have very different views of reality.

Here's wishing you all the best in yours,

Michael Roy Ames


