From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Jun 16 2004 - 19:35:01 MDT
Metaqualia wrote:
>
> I agree with you that we shouldn't hardcode imperatives. However I am still
> waiting to hear about specific mechanisms of extrapolation and 'case
> studies' for your system. How do you plan to get information out of people's
> brains?
This question has been asked a couple of times, and I shall now give a
strange answer:
It doesn't matter.
Which is to say, this is the point where the FAI theorist waves his hands dismissively and says, "Oh, well, that's a mere *practical* question, of no theoretical interest."
Of course I then take off my theorist hat and feed the AI information, starting with inputs as simple as games of billiards. The AI will interrogate the programmers to improve its models of them in particular, and thereby of humans in general, and work its way up to an offline static copy of the Internet; my guess is that when it grows up, at least two direct nondestructive brainscans would be a good idea...
But it doesn't matter.
Once you define the thing-that-you-want-to-approximate - the collective
volition - then you just point the RPOP at the invariant and say, "Guess."
Just that one word. It doesn't really need to be any more complex than
that, from a theoretical perspective. You can calculate the expected
information value and expected information cost, and thereby decide whether
it's worthwhile to give everyone a 20-page form to fill out, or whether you already know enough to guess the collective volition of humankind just from
watching downloaded anime. But from an FAI theory standpoint, you just
point the superintelligent Bayesian Thingy at something that's been defined
down to a question of simple fact, and say, "Guess."
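To make that guess-or-gather-more-data decision concrete, here is a toy sketch of the expected-value-of-information comparison. The hypotheses, probabilities, utilities, and survey cost are all made-up illustrative numbers, not anything from the actual proposal; the point is only the shape of the calculation.

    # Illustrative only: a toy expected-value-of-information calculation.
    # The hypotheses, probabilities, utilities, and survey cost are made-up
    # numbers standing in for "should the RPOP guess now, or gather more data?"

    # Two toy hypotheses about what the collective volition prefers,
    # with the RPOP's current subjective probabilities.
    prior = {"prefers_A": 0.6, "prefers_B": 0.4}

    # Utility of each available action under each hypothesis.
    utility = {
        "do_A": {"prefers_A": 10, "prefers_B": 0},
        "do_B": {"prefers_A": 0, "prefers_B": 10},
    }

    def expected_utility(action, belief):
        return sum(p * utility[action][h] for h, p in belief.items())

    # Value of guessing now: take the best action under current beliefs.
    guess_now = max(expected_utility(a, prior) for a in utility)

    # Value with perfect information (say, the 20-page form answered honestly):
    # under each hypothesis we would pick the best action for that hypothesis.
    perfect_info = sum(
        p * max(utility[a][h] for a in utility) for h, p in prior.items()
    )

    survey_cost = 2.0  # made-up cost of bothering everyone with the form

    evoi = perfect_info - guess_now
    print(f"Expected value of the survey: {evoi:.1f}, cost: {survey_cost:.1f}")
    print("Gather more data" if evoi > survey_cost else "Just guess")

With these invented numbers the information is worth more than it costs, so the toy RPOP hands out the form; change the prior or the cost and it just guesses.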
There's one interesting chicken-and-egg problem where the RPOP needs to
know, right away, that it shouldn't destructively scan everyone just to
find out what their volitions are. (Oops!) But that's a pretty
straightforward problem theoretically - it was described in CFAI, in fact -
and if the programmers remember to say, "Oh, incidentally, there's a strong
probability our collective volition isn't to kill people [to extract their
volitions or otherwise]", that's enough to get the RPOP over the initial
hump. In fact, I should rather hope that long before hard takeoff the RPOP
has sufficient knowledge of humans to guess in its own right that their
collective volition views certain actions as important, meaning that a stable, confidently extrapolated collective volition is required to authorize them.
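Purely as an illustration of that last sentence, and not as anything specified in the proposal itself, a gate of that kind might be sketched like this; the function names, fields, and confidence threshold are invented for the example.

    # Illustrative sketch only: a toy gate requiring a stable, confidently
    # extrapolated collective volition before high-impact actions are allowed.
    # All names and thresholds here are invented for the example.

    CONFIDENCE_THRESHOLD = 0.99  # made-up bar for "confidently extrapolated"

    def may_execute(action, extrapolation):
        """Allow routine actions; require a stable, confident extrapolated
        authorization for actions flagged as important (e.g. irreversible)."""
        if not action["important"]:
            return True
        return (
            extrapolation["stable"]
            and extrapolation["confidence"] >= CONFIDENCE_THRESHOLD
            and action["name"] in extrapolation["authorized_actions"]
        )

    # Example: destructive scanning is flagged important, and the current
    # extrapolation is neither stable nor confident, so it is refused.
    scan = {"name": "destructive_brain_scan", "important": True}
    current = {"stable": False, "confidence": 0.3, "authorized_actions": set()}
    print(may_execute(scan, current))  # False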
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence