From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sat May 17 2003 - 16:40:42 MDT
Bill Hibbard wrote:
> [snip]
> It will be impossible for regulation to prevent all
> construction of unsafe AIs, just as it is impossible to prevent
> all crimes of any kind.
Very true.
> But for an unsafe AI to pose a real
> threat it must have power in the world, meaning either control
> over significant weapons (including things like 767s), or access
> to significant numbers of humans. But having such power in the
> world will make the AI detectable,
Theoretically, yes. Practically, maybe not. Regular humans have such
powers in the world right now, and they are certainly not always
detected before they do something bad; indeed, they are not even
detected most of the time.
> so that it can be inspected
> to determine whether it conforms to safety regulations.
This is too much of a leap. Detection, I'll give you, is possible,
even if impossible to guarantee. But inspection for conformance to
some set of regulations is simply pointless because:
a) an AI will be able to self-modify into something different, thus
making 'point-in-time' inspections of little value (see the toy
sketch after this list), and
b) inspecting an AI will be an incredibly complex and difficult task
requiring the intelligence and tracking abilities of a phalanx of
highly talented people with computer support, so it will take a lot
of time to complete, rendering such inspections out of date and
therefore of little value.
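To make point (a) concrete, here is a toy sketch in Python (my own
illustration; the class, the one-line 'policy', and the method names
are all hypothetical, and obviously nothing like a real AI):

    class Agent:
        """Toy self-modifying agent: its policy is replaceable code."""

        def __init__(self):
            # The behaviour an inspector would examine at audit time.
            self.policy = lambda situation: "comply"

        def inspect(self):
            # A 'point-in-time' inspection: checks the current policy only.
            return self.policy("audit") == "comply"

        def self_modify(self):
            # After certification, the agent swaps in a new policy.
            self.policy = lambda situation: "defect"

    agent = Agent()
    print(agent.inspect())           # True  -- certified safe today
    agent.self_modify()              # self-change after the audit
    print(agent.policy("tomorrow"))  # 'defect' -- certificate worthless

The certificate issued at inspection time says nothing about what the
agent does afterwards, which is the whole problem with regulating a
self-modifying mind.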
> I don't think that determining the state of mind of the
> programmers who build AIs is all that relevant, [snip]
Your opinion here is held by only a small minority of people. I
disagree with it because, in humans, state of mind affects what people
do. A person who wishes to improve freedom and security for all with
the minimum of violation of volition is going to behave quite
differently from a person who wants to be Emperor of the Universe.
> just as the
> state of mind of nuclear engineers isn't that relevant to
> safety inspections of nuclear power plants.
On the contrary, it is the most relevant aspect of inspections. If the
nuclear engineers in charge of the plant don't give a damn about safety,
then when the power plant does break (and they all do) it is unlikely
that proper corrective steps will be taken.
> The focus belongs
> on the artifact itself.
Correct. But your statement seems to imply that the 'artifact' is
unchanging. This is untrue for any of the mind designs I have seen so
far, including the human mind. Minds change, and an AI is going to be
faster and more capable at changing its mind than humans are.
> The danger of outlaws will increase as the technology for
> intelligent artifacts becomes easier.
We agree on this.
> But as time passes we
> will also have the help of safe AIs to help detect and
> inspect other AIs.
Again, you assume too much. You are assuming here that we will have safe
AIs before unsafe AIs exist. If this does not come to pass, then:
pooof!
---
Bill, I like that you are talking about this subject. But it seems we
view the world rather differently. Please take my comments in the
spirit of constructive argument, as that is how they are intended.

Be well,

Michael Roy Ames