From: Luke (wlgriffiths@gmail.com)
Date: Tue Oct 20 2009 - 12:26:01 MDT
Link to the Google doc:
http://docs.google.com/Doc?docid=0AeO1BSsSjfiPZGdjZDN2bWhfMTYwZnM4cWN6ZDk&hl=en
Please let me know if you have trouble editing the document. I believe there
is a discussion feature built into the document, though I'm not sure. If
there is, please add a quick comment whenever you save changes, explaining
your rationale. The more concise your comment, the easier it will be for
future collaborators to read the developmental history of the document.
- Luke
On Tue, Oct 20, 2009 at 2:16 PM, Luke <wlgriffiths@gmail.com> wrote:
> @Pavitra: thanks for reminding me of that. It's true - there's a lot of
> talking that needs to get done before we can throw out the haiku of FGAI
> design. I accept this fate, this large-scale discussion, though I can't
> promise I'll read everything before I respond. Not enough time for that.
> @Matt Mahoney: Two points, as follows:
>
> (1) With regard to a definition of friendly AI needing to encompass all
> those bits: I've got a big problem there, because that's impossible. So we
> either need to find a way around that intractable problem (i.e. a smaller
> definition, something in the 10^5-bit range; 10^2 would be great, but that's
> obviously wishful thinking), or we need to accept that we're not going to be
> able to proceed, and start "saying our prayers" or seeking "enlightenment"
> or stocking up on heroin or whatever else we need to do to face death. This
> is a completely serious point: if we decide we cannot hope to produce this
> friendly AI, it's better to accept that as quickly as possible and decide
> what we want to do with this short stay between the birth canal and the
> grave.
>
> However, as a programmer I'm tempted to point out that often you don't need
> to see the bits that represent an object, but merely the bits that represent
> its interface. Let someone else worry about implementation.
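>
> A minimal sketch of that point in Python (the names here are mine, purely
> illustrative):
>
>     from abc import ABC, abstractmethod
>
>     class Mind(ABC):
>         """Whatever the implementation, we only ever test this surface."""
>         @abstractmethod
>         def answer(self, question: str) -> str: ...
>
>     def run_test(mind: Mind, question: str, check) -> bool:
>         # The harness never inspects the bits inside `mind`; it only
>         # scores observable behavior against the interface.
>         return check(mind.answer(question))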
>
> (2) You said that a test-giver has to be more intelligent than a
> test-taker. I don't think that's necessarily the case. For instance, what
> if the test consisted of: "We're dealing with RSA. Here's an encrypted
> message, and here's the public key that encrypted it. What is the private
> key?" It might take massive computational power to "take" that test, i.e.
> break the code. But it takes orders of magnitude less both to generate the
> encrypted message and to confirm any answer the test-taker provides. This is
> quite similar to the problem of theorem-provers mentioned above. Another
> example of a test could be: "Here's a lab full of standard stock
> ingredients. Create something that will make me trip. I will give you your
> grade one hour after you deliver your answer."
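>
> A toy sketch of that asymmetry in Python (tiny textbook primes, nothing
> like real key sizes; pow(e, -1, m) for the modular inverse needs Python
> 3.8+):
>
>     p, q = 61, 53                        # secret primes; cheap for the test-giver to pick
>     n, e = p * q, 17                     # public key
>     d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: the answer key
>
>     m = 42                               # plaintext
>     c = pow(m, e, n)                     # cheap: generate the encrypted message
>
>     def grade(candidate_d):
>         # cheap: confirm a claimed private key by decrypting the message
>         # (a careful grader would check several ciphertexts, not just one)
>         return pow(c, candidate_d, n) == m
>
>     assert grade(d)   # grading is trivial; *finding* d means factoring n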
>
>
> As a final point: I'm going to go ahead and put the to-do list up online.
> I warn that I'm going to lean heavily on real-world applicability, so we might
> see a constant resonance between mathematical definitions and what I
> consider "executable" actions. I'll be putting up steps like "raise
> $20,000,000 to fund research" and "create a 700 TFLOPS computer to perform
> tests" and the like. Others can focus on the mathematical rigor necessary
> at various junctures. Defining waypoints as mathematical objects, and the
> interconnecting strategies as meatspace man-hours, may be our best bet.
>
> - Luke
>
>
>
> On Tue, Oct 20, 2009 at 11:22 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
>> Luke wrote:
>> > Alright, it is no wonder you guys can't get anything done. I start a
>> single thread, with a single, simple purpose: to trade versions of a single
>> document: the to-do list. And you all can't resist the urge to get into the
>> most arcane, esoteric mathematical bullshit imaginable. "Degree of
>> compressibility". "Test giver must have more information than test-taker".
>> wank wank wank.
>>
>> Because your checklist is wrong. Specifically, the first 3 steps are
>> wrong. This invalidates the last 2 steps that depend on them. To quote:
>> >>>
>>
>> The ordering of this document implies dependency only insofar as each step's dependencies appear before that step; other orderings that maintain this constraint are also possible.
>>
>>
>> [ ] Compile design requirements for "friendly AI". When will we know we have succeeded?
>>
>> [ ] Develop automated tests which will determine whether a given system is friendly to humans or not
>> [ ] Develop automated tests which will determine whether a given system is intelligent or not (IQ, whatever)
>> (these tests should reflect the requirements laid out in the first step: "compile design requirements")
>>
>> [ ] Develop prototype systems and apply these tests to them. Refactor the tests as necessary if we find that some requirement is not covered by them.
>>
>> [ ] Continue refactoring prototypes until we have a system which passes both the intelligence tests and friendliness tests.
>>
>> <<<
>>
>> 1. The definition of "Friendly AI" has an algorithmic complexity of 10^17
>> bits. Roughly, it means to do what people want, with conflicts resolved as
>> an ideal secrecy-free market would resolve them. So your definition has to
>> describe what 10^10 people want, and how much they want it, which means your
>> definition must describe what they know, and each person knows about 10^7
>> bits that nobody else knows. My definition is cheating, of course, because I
>> am pointing to human brains instead of describing what they contain.
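>>
>> To spell out the arithmetic:
>>
>>     people = 10**10                # rough world population
>>     private_bits = 10**7           # bits each person knows that nobody else does
>>     print(people * private_bits)   # 10**17 bits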
>>
>> Also, I haven't defined "people". Does it include embryos, animals,
>> slaves, women, and illegal immigrants? (Don't give me an answer that
>> depends on your cultural beliefs.) Does it include future
>> human-animal-robot-software hybrids? Do all people have equal rights, or do
>> we weight rights by how much money each person has, as in a real market?
>>
>> 2. You can't test for friendliness unless you already know that the tester
>> is friendly. How do you know it isn't lying?
>>
>> 3. You can't test for intelligence unless you are smarter than the test
>> taker. Otherwise, how do you know that it is giving the right answers?
>>
>> So the result is that we will find another way to build AI. There is a US$1
>> quadrillion incentive to get it done. That's the value of global human labor
>> divided by market interest rates.
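>>
>> Back of the envelope, with round figures (roughly US$50 trillion/year of
>> human labor and a 5% rate; the exact numbers are assumptions, not
>> essentials):
>>
>>     annual_global_wages = 50e12                  # assumed: ~US$50 trillion/year
>>     interest_rate = 0.05                         # assumed: ~5% market rate
>>     print(annual_global_wages / interest_rate)   # 1e15, i.e. US$1 quadrillion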
>>
>> Just in case you haven't noticed, the internet is getting smarter.
>>
>> -- Matt Mahoney, matmahoney@yahoo.com
>>
>
>