From: Mike & Donna Deering (deering9@mchsi.com)
Date: Sat Jul 06 2002 - 17:40:21 MDT
Eliezer writes: "If anyone wishes to administer a lie-detector test on this, such as lie detection technology is in modern times, I'll take it. If anyone comes up with a better lie detector test before the Singularity I'll take that as well. I am not an AI; if I were lying you would have a reasonable expectation of catching it using your native human abilities, and the fact that you have not is evidence, though not proof. I currently visualize, although I am not completely sure and can make no promises, that the grounding of the Friendly AI structure will be such that the other programmers on the project can verify that, despite being sensitive to the motives of the programmers, the AI will have been told to resist hidden selfish motives."
The offer to take a lie detector test is not evidence until one is actually administered; I could make the same offer without much expectation that one would ever be given. And native human abilities have failed to catch many a con man (not that I'm saying you are one; you probably haven't had time to learn a skill like that) or a psychopath, which I have no reason to think you are not. As for the other programmers on the project, I expect you could easily steer their selection toward those suitable for your purpose. In my opinion these two options are at least equally plausible:
1: A nine-year-old genius, struck by the realization that the world is in need of saving and that he can make a major contribution to this effort, decides to code the first AI.
2: A nine-year-old genius suffers a traumatic rejection by someone of the opposite sex and decides to take over the world, both to pay it back for the pain suffered and to secure his position of superiority. Searching around for a way to implement this, he decides to code the first AI as his tool.
But for the record, I admit that I tend toward paranoia, seeing hidden motives and conspiracies everywhere.
Eliezer also writes: "I don't think selfish AI programmers executing multiyear altruistic masquerades constitute a major threat to the Singularity, but I am always on the lookout for ways to reduce threats to the Singularity."
Although I can't logically differentiate between these two options, I am still left with the choice between the existential risks of knowledge-enabled weapons and letting someone initiate the Singularity.
Mike.