From: Chris Staikos (email@example.com)
Date: Tue Nov 09 2010 - 13:25:30 MST
> 2) the AI should obey commands of its creator, and clearly understand who
> is the creator and what is the format of commands.
What if this AI realizes that there is something fundamentally wrong with
the actions of its creator, and that it is in its creator's (and
humankind's) best interest to disobey? This is similar to the way a parent
will do things for their children that seem unfair at the time, but that
the children, once older, can see were in fact in their best interest in
the long term.
> 3) AI must comply with all existing CRIMINAL and CIVIL laws. These laws are
> the first attempt to create a friendly AI – in the form of state. That is an
> attempt to describe good, safe human life using a system of rules. (Or
> system of precedents). And the number of volumes of laws and their
> interpretation speaks about complexity of this problem - but it has already
> been solved and it is not a sin to use the solution.
It would make more sense, I think, for the AI to understand the spirit of
the law, as it were. These laws are in place because it is easier to test
questionable situations against a set of rules than to evaluate each
situation as its own case. An AI could evaluate situations based on the
"moral values" which gave rise to these laws, and its behavior would likely
be very similar to that of an AI that blindly abided by those laws. The
difference is that it would be more flexible, and would not fall victim to
the inconsistencies that arise from such a strict set of rules. This is
essentially how humans function - you'd be hard-pressed to find a human
being who doesn't routinely break certain laws, though this doesn't make
them immoral to any degree.
I think it would be more effective for an AI not to want to hurt a human
because it understands compassion. There's no rule in my brain that
explicitly tells me not to hurt someone - I refrain because it is not in
my best interest.
If an AI understands on the most fundamental level that our mutual existence
is not a zero-sum game, but that by working in harmony we can optimize our
lives, then you have a truly friendly AI. Anything more detailed seems to
me to get further from the point. Again, I think that this is how we
function - we are certainly selfish beings, but altruism arises out of the
understanding that mutual gain is the highest possible gain.
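The non-zero-sum point can be made concrete with a standard toy model, the
one-shot Prisoner's Dilemma. The payoff numbers below are the conventional
illustrative values, not anything from this thread - a minimal sketch of why
cooperation beats mutual defection in joint terms:

```python
# Hypothetical payoff matrix: (row player, column player) payoffs for
# actions C (cooperate) and D (defect), using conventional PD values.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both gain
    ("C", "D"): (0, 5),  # one exploits the other
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both worse off than (C, C)
}

def total_welfare(a, b):
    """Joint payoff (sum over both players) for a pair of actions."""
    return sum(PAYOFFS[(a, b)])

# Mutual cooperation maximizes the joint payoff, even though each player
# is individually tempted to defect - the game is not zero-sum.
best = max(PAYOFFS, key=lambda acts: total_welfare(*acts))
print(best, total_welfare(*best))  # ('C', 'C') 6
```

An AI that models interactions this way would see that "harmony" is not
charity but the strategy with the highest attainable joint payoff.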
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT