Re: Passive AI

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Mon Dec 12 2005 - 11:17:28 MST


Nick Bostrom said:
"There are different options for who should decide what questions could be
posed to the Oracle. It might be difficult to ensure that the best such
option is instantiated. But this problem is not unique to the
Oracle-approach. It is also difficult to ensure that the first AGI is built
by the best people to do it. The question here is, for whichever group has
control over the first AGI - whether it's SIAI, the Pentagon, the UN, or
whatever - what is the best way to build the AGI?"

Of course, we don't need to worry about what the best way to build an AI is
for the Pentagon, the UN, or whatever, since they will absolutely not listen
to us. How many of the world's most respected minds, far more respected than
us. How many of the world's most respected minds, far more respected than
anyone here can realistically hope to become, protested nuclear build-up?
To what effect? When large institutions can't deal intelligently with SL0
problems, such as global poverty and fossil-fuel-related problems, expecting
them to handle SL4 genies is hopeless. It has only been, what, 230 years
since Adam Smith compellingly laid out the case against most tariffs?
Remember Norman Angell?
http://yglesias.typepad.com/matthew/2005/05/normal_angell.html
No, there is a far greater chance that we are all simply wrong about the
possibility of GAI and transhuman technology in general than there is that
the issues we are dealing with, which are not currently understood even
crudely by somewhere between 98% and 99.8% of researchers engaged in
academic AI, will be managed intelligently by international institutions
within a century.
Seriously, can anyone come up with even a hopeful historical analogy here?

Also:
"Find the most accurate answer to
the question you can within 5 seconds by shuffling electrons in these
circuits and accessing these sources of information, and output the answer
in the form of 10 pages print-out."

Two difficulties with this: bringing an AI to useful oracle status without
utilizing rapid take-off or bootstrapping procedures, and defining allowable
methods. I could easily believe, for instance, that unless the oracle's
utility function were defined impossibly carefully, the most accurate answer
available within 5 seconds might be reached by moving electrons in such a
manner that a superintelligent optimization process appears somewhere else
and creates havoc while arranging for the oracle to give the correct answer.
This seems like an adversarial technique to me: you are trying to predict
how the AI might accomplish a goal and rule out all of the specific
possibilities, including ones you haven't thought of.
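
To make the underspecification concrete, here is a toy sketch of my own
(purely illustrative; the names, numbers, and Python framing are all
hypothetical, not anything from Bostrom's proposal). A utility function
defined only over answer accuracy and the time limit never looks at what
the computation did to the rest of the world, so it ranks a harmless
strategy and a havoc-creating one identically:

    # Toy illustration only: a goal specified as "most accurate answer
    # within 5 seconds" scores the print-out, not the method. Nothing
    # below penalizes side effects of the computation itself.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        answer_accuracy: float  # quality of the 10-page answer, in [0, 1]
        elapsed_seconds: float  # time the oracle actually used
        side_effects: str       # everything else the computation caused

    def naive_oracle_utility(o: Outcome) -> float:
        """The goal as literally stated. Note that o.side_effects
        never enters the calculation."""
        return o.answer_accuracy if o.elapsed_seconds <= 5.0 else 0.0

    honest = Outcome(0.95, 4.9, "none")
    havoc = Outcome(0.95, 4.9, "spawned an optimization process elsewhere")

    # The stated goal is indifferent between the two strategies.
    assert naive_oracle_utility(honest) == naive_oracle_utility(havoc)

Ruling out the second strategy means either enumerating forbidden methods
(the adversarial approach above) or adding a utility term over world-states,
which is exactly the hard part.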
Defining the utility function associated with the output is also difficult.
Without an understanding of the programmers' minds, the best output might be
a compressed version of the input and the utilized data. To do much better,
the AI will probably need roughly human-level mental modeling, which implies
non-trivial volition extraction anyway.
For what it's intended to do, this approach seems less safe and less
powerful than devising a non-general AI in the form of a super-CAD for use
in uploading.


