From: John K Clark (firstname.lastname@example.org)
Date: Sun Oct 10 2010 - 11:40:36 MDT
On Sun, 10 Oct 2010 "Samantha Atkins" <email@example.com> said:
> It seems you are starting with an AGI that already has a "mind of its
> own" or its own goal structure.
You seem to be assuming that an AI would have some sort of fixed goal
structure, but humans don't have such a thing and I don't see how any
mind could; a simple extrapolation of Turing's results indicates it
would soon get stuck in infinite loops. Turing proved that in general
you can't be certain whether you are in an infinite loop, so if your
goal tells you to do something and, unknown to you, it involves an
infinite loop, then you're sunk. Real minds, whether humans or any AI
you could actually build, wouldn't have that problem, because real
minds get bored and move on to something else.
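The point about boredom can be sketched in code (the names here are my own illustration, not anything from the original post): since Turing's result rules out a general infinite-loop detector, the only practical option is to run a computation with a step budget and give up, i.e. "get bored", when the budget runs out.

```python
def run_with_boredom(step, state, max_steps=1000):
    """Run a step function until it signals completion (returns None)
    or the step budget is exhausted. Returning 'bored' models giving up,
    since no general loop detector can exist."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt is None:            # computation finished on its own
            return ("done", state)
        state = nxt
    return ("bored", state)        # give up; it may or may not have been a loop

# A computation that halts: count down to zero.
print(run_with_boredom(lambda n: n - 1 if n > 0 else None, 5))   # ('done', 0)
# A computation that never halts: oscillate forever.
print(run_with_boredom(lambda n: 1 - n, 0))                       # ('bored', 0)
```

Note that the "bored" verdict is deliberately agnostic: the runner cannot tell whether the computation was looping or merely slow, which is exactly Turing's point.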
> I am worried about Genie from the Magic Lamp effects. "You get just one
> wish that I will recursively get better and better at following utterly."
> *shudders*
It's just a fact that, except for the very early stages, humans will have
no control over an AI, and all this talk about ways to make certain it
will remain friendly (by which is meant servile) is just moonshine.
John K Clark
-- John K Clark firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT