Re: Eliezer: unconvinced by your objection to safe boxing of "Minerva AI"

From: Kaj Sotala (Xuenay@sci.fi)
Date: Sun Mar 13 2005 - 06:31:40 MST


From: "Daniel Radetsky" <daniel@radray.us>:
> 1. An XAI (friendly or unfriendly) can make a human into its slave only if it
> understands that the human can be made into a slave.

Any code written by humans is bound to contain plenty of bugs
and suboptimal constructions. An AI programmed for recursive
self-improvement will find these, and from them it can infer
that humans are fallible when writing code. It isn't a huge
step to generalize from this to humans being fallible in other
domains as well, and thus open to being manipulated.

An alternative way for an RSI-capable AI to reach the same
conclusion would be to compare its own functioning before and
after modifying itself. It would see that the changes made it
process information better or worse than before. This implies
that other beings likewise differ in information processing
capability, depending on the level of self-improvement they
are capable of. A being with inferior information processing
capability can be manipulated by one with a superior
capability.
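
A toy sketch of that comparison (the benchmark task and both
procedures are invented for the example; this only illustrates
the before/after measurement, not a real self-improvement
loop):

    import time

    def benchmark(solver, problem):
        # Time how long a given procedure takes on a fixed task.
        start = time.perf_counter()
        solver(problem)
        return time.perf_counter() - start

    def naive_sort(xs):
        # "Before": a deliberately slow procedure
        # (selection sort via repeated min-and-remove).
        xs = list(xs)
        out = []
        while xs:
            m = min(xs)
            xs.remove(m)
            out.append(m)
        return out

    improved_sort = sorted  # "After": the rewritten procedure.

    problem = list(range(2000, 0, -1))
    before = benchmark(naive_sort, problem)
    after = benchmark(improved_sort, problem)
    # Observing before > after, the system can conclude that
    # agents differ in processing capability depending on how
    # far they have self-improved -- the inference above.
    print(before > after)  # True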

AIM Xuenay / MSN Xuenay@Hotmail.com | http://www.saunalahti.fi/~tspro1/
Give a man a fish and you feed him for a day; teach him to use the Web and he won't bother you for weeks.


