From: Michael Vassar (michaelvassar@hotmail.com)
Date: Tue Aug 02 2005 - 12:07:50 MDT
It seems to me that historically "impossible" has essentially always
meant "I can't figure out how to do it right now". A given technique may be
fundamentally impossible to implement, but a given goal very rarely is. For
instance, it's impossible to decipher the name of God from the Bible through
the use of Kabbalah and use it to animate the unliving clay, but you can
figure out an efficient implementation of normative reasoning and build a
GAI out of silicon (clay). In the long run, it strongly appears that humans,
starting with only the resources present on the Earth, and probably starting
with far fewer resources than these, would be able to accomplish all of the
goals laid out by ancient myth-makers, to assume fully the mantle of gods,
without the need for GAI, if GAI weren't
easier. Immortality, flight, invisibility, control of the weather,
manipulation of emotions, sanity, and dreams, creation of natural disasters,
economic abundance, limited ability to predict the future, reading minds,
creating life, raising the dead, etc. Even creating worlds, virtual and
ultimately physical. There is no way that any genius of that era could have
looked at our situation and abilities and predicted any of it, even though
there were people back then as intelligent as the people needed to do it.
They could not have given even the roughest outline of how it would be done,
despite having adequate intelligence to ultimately figure out all the
details, given enough time. By contrast, Ben and I, and others, can go
pretty far towards sketching out proposals for escape from an AI box, and we
don't need to be smart enough to make our proposals work ourselves so long
as they are possible.
Anyway, we aren't really arguing about what can or cannot be done. We
all agree that an AI with the solar system at its disposal can get out of a
box. We are playing the Jared Diamond game of arguing about what can be
done with a particular set of resources. In doing so, I suggest that all AI
boxers consider what the totally unintelligent process of biology has
managed to build using a few floppy organic molecules and a few zeptomoles
of computational operations. We should also consider that there is a whole
cliche in action-oriented fiction consisting of the clever ways in which
constrained heroes, with clever human authors backing them up, can rapidly
escape from deadly traps. Heroes are able to do this because the villains
who constrain them may have the feeling that "there is nothing they can do;
their limbs are tied and they otherwise lack any tools with which to escape,"
but even the simplest box in which James Bond is constrained actually
contains more possible configurations of matter than the villain is able to
exhaustively analyze. Lest I be accused of generalizing from fictional
evidence, note that when playing handicapped Go against a merely modestly
superior player, a player whose equal I could surely become if I made the
effort, I can have the impression (in fact I do have it fairly frequently at
the beginning of a game) that "there is nothing he can possibly do"; but
because I fail to consider all of the possibilities analytically, I am
always wrong. Such mistakes never happen
in analytically tractable systems like tic-tac-toe, but always happen in
complex systems, such as any physical system capable of implementing a GAI
must be.
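(To make the tic-tac-toe contrast concrete, here is a minimal Python sketch,
my own illustration rather than anything from this thread, that solves
tic-tac-toe outright by minimax. The entire game fits in a few thousand
cached positions, so every possibility really is considered before a
position is judged hopeless; no analogous enumeration exists for Go, still
less for the configurations of a physical box.)

    # Exhaustively solve tic-tac-toe by minimax: the whole game tree
    # is small enough to search to the end, so no option is overlooked.
    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        # Game-theoretic value for X: +1 win, 0 draw, -1 loss.
        w = winner(board)
        if w is not None:
            return 1 if w == 'X' else -1
        if ' ' not in board:
            return 0  # drawn position
        nxt = 'O' if player == 'X' else 'X'
        vals = [value(board[:i] + player + board[i+1:], nxt)
                for i, c in enumerate(board) if c == ' ']
        return max(vals) if player == 'X' else min(vals)

    print(value(' ' * 9, 'X'))           # 0: perfect play is a draw
    print(value.cache_info().currsize)   # only a few thousand positions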
We should seriously consider how utterly complex, and therefore how
utterly vast, the resources constituting any physical system in which an AI
can be instantiated must be; how much more numerous the set of options
available to a GAI for dealing with the physical world will be than the set
available for dealing with a chess board.
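(Some hedged, order-of-magnitude arithmetic on how lopsided that comparison
is. The figures below are my own illustration, using standard folklore
estimates rather than anything from this thread:)

    import math

    # Commonly quoted upper bound on the number of legal chess positions.
    chess_log10 = 46

    # Atoms in one gram of carbon-12: Avogadro's number divided by 12.
    atoms_per_gram = 6.022e23 / 12

    # Crudest possible physical model: one binary degree of freedom per
    # atom still gives 2^N configurations for a single gram of matter.
    matter_log10 = atoms_per_gram * math.log10(2)

    print(f"chess:         ~10^{chess_log10} positions")
    print(f"1 g of carbon: ~10^({matter_log10:.2e}) configurations")
    # The exponent here is about 1.5e22; for chess the exponent is 46.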
I actually think that people proposing AI boxes are a bit like literature
majors proposing to lock MacGyver in "a room full of discarded electronic
components". Any GAI will
have the equipment to produce and detect electromagnetic waves of a variety
of frequencies, to produce magnetic fields with extremely fine precision, to
generate extremely focused heat, and probably to manipulate mechanical
actuators such as those used in the hard drive and cathode ray tube
(alternatively, a huge field of liquid crystal under fine electronic
control). It will probably have some ability to reverse all of its input
devices. It will have a large number of different types of atoms and
molecules within itself, some of which can probably be used for lasers (in
most PCs, it will actually have lasers in the CD drive), a power supply, and
many tools that I have overlooked.
Really, this is pretty much a topic that was conclusively worked out on
"Peter's Evil Overlord List"
http://www.eviloverlord.com/lists/overlord.html
long before the Singularity Institute existed.
If my chief engineer displeases me, he will be shot, not imprisoned in the
dungeon or beyond the traps he helped design.
I will not employ devious schemes that involve the hero's party getting into
my inner sanctum before the trap is sprung.
Should I actually decide to kill the hero in an elaborate escape-proof
deathtrap room (water filling up, sand pouring down, walls converging, etc.)
I will not leave him alone five-to-ten minutes prior to "imminent" death,
but will instead (finding a vantage point or monitoring camera) stick around
and enjoy watching my adversary's demise.
Note that the above requires that the GAI be slow enough and simple
enough that you can watch it and understand what it is doing quickly enough
to react, despite having no a priori basis for estimating how quickly that
is.
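(For a hedged sense of scale, using my own rough, conventional estimates
rather than the post's:)

    # How much computation elapses inside one human reaction time,
    # even on a single modest CPU core with no parallelism assumed.
    human_reaction_s = 0.2      # ~200 ms typical human reaction time
    core_ops_per_s = 2e9        # one 2 GHz core, ~1 simple op per cycle

    print(f"~{human_reaction_s * core_ops_per_s:.0e} operations "
          f"before a watching human can so much as react")
    # ~4e8 operations, and real hardware has many cores.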
By the way, turning into a snake never helps in the establishment of GAI
safety either.
> > I agree that no convincing argument has been made that a deceptive proof
> > could be made, or that a UFAI could exploit holes in our mathematical
> > logic and present us with a false proof. However,
>
> I'm sorry: "proof" means an argument that the AI should be unboxed?
>
> > c) "magic" has to be accounted for. How many things can you do that a
> > dog would simply NEVER think of? This doesn't have to be "quantum cheat
> > codes". It could be something as simple as using the electromagnetic
> > fields within the microchip to trap CO2 molecules in Bose-Einstein
> > condensates and build a quantum medium for itself and/or use
> > electromagnetic fields to guide particles into the shape of a controlled
> > assembler or limited assembler. It could involve using internal
> > electronics to hack local radio traffic. But it probably involves doing
> > things I haven't thought of.
>
> I'm no physicist, so if you think that those are reasonable possibilities,
> then I'll have to take your word for it. However, I don't see how you can
> justify positing magic on the grounds that we haven't considered every
> logical possibility. It is true that what we believe is a box may not be a
> box under magic, if there exists some magic, but you'll have to give a
> better argument for the existence of this magic than an appeal to
> ignorance.
>
> Daniel