From: Kevin (firstname.lastname@example.org)
Date: Tue Oct 20 2009 - 12:42:24 MDT
On Tue, Oct 20, 2009 at 11:16 AM, Luke <email@example.com> wrote:
> (2) You said that a test-giver has to be more intelligent than a
> test-taker. I don't think that's necessarily the case. For instance, what
> if the test consisted of: "We're dealing with RSA. Here's an encrypted
> message, and here's the public key that encrypted it. What is the private
> key?" It might take massive computational power to "take" that test, i.e.
> break the code. But it takes orders of magnitude less to both generate the
> encrypted message, and confirm any answer the test-taker provides. This is
> quite similar to the problem of theorem-provers mentioned above. Another
> example of a test could be: "Here's a lab full of standard stock
> ingredients. Create something that will make me trip. I will give you your
> grade one hour after you deliver your answer."
> - Luke
How does a mouse administer a test to a human to gauge the human's
intelligence? I could also write a program to crack the private key with
brute force, but I'd hardly call that intelligent.
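[Editor's note: the solve-vs-verify asymmetry under discussion, and the brute-force attack Kevin mentions, can be sketched concretely. This is a hypothetical toy illustration with deliberately insecure key sizes, not anyone's actual proposal from the thread.]

```python
# Toy RSA example: finding the private key by brute force takes work
# exponential in the key size, while checking a claimed answer is cheap.
# The numbers here (p=61, q=53) are toy values, far too small to be secure.

def brute_force_private_key(n, e):
    """Recover d by trial-division factoring of n -- the unintelligent
    brute-force program Kevin describes. Feasible only for toy keys."""
    p = next(i for i in range(2, n) if n % i == 0)
    q = n // p
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse of e (Python 3.8+)

def verify_answer(n, e, d, message=42):
    """The test-giver's check: encrypt a message, decrypt with the
    claimed d -- constant work, however d was found."""
    return pow(pow(message, e, n), d, n) == message

n, e = 3233, 17                     # toy public key (n = 61 * 53)
d = brute_force_private_key(n, e)   # expensive for the test-taker
assert verify_answer(n, e, d)       # cheap for the test-giver
```

The point of the sketch is the asymmetry: `verify_answer` costs a couple of modular exponentiations regardless of key size, while `brute_force_private_key` blows up as `n` grows, so a weak test-giver can still grade a much stronger test-taker.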
I've been lurking on this list for a while and I haven't found this thread
very useful or interesting. From my point of view, Luke, you aren't fully
grasping the difficulties of creating a Friendly AI. I think your enthusiasm
is to be admired, but there needs to be a lot more rigor in your material.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT