From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Sun Feb 19 2006 - 20:24:37 MST
Why do we need to influence it during the design process beyond specifying an initial question? To communicate a design, all it prints out is an engineering blueprint for a really efficient ion thruster or a series of chemical equations leading to a cheap solar cell. Anything that looks like it might destabilize the vacuum or create a UFAI, we don't build. There are many fields of engineering we understand well enough to be assured that the products of a given blueprint won't be harmful.
To significantly reduce most extinction threats, you need to monitor all bio/chemical lab facilities and all computers worldwide. A means of military/police intervention must also be devised to deal with violators. Obviously there are risks of initiating WWIII and of bringing tyrants to power if the extinction-threat-reduction process is botched. Obviously an AGI may kill us off.
There is a volume of probability space where the AGI intends to be unfriendly, yet blueprints some useful product technologies (which we can use to manually reduce extinction risks) before we realize it is trying to kill us and pull its plug. I also realize this is a recipe for an AGI that can be used to take over the world, regardless of whether it is friendly.
The computer hardware design I was thinking of is here:
http://www.zyvex.com/nanotech/mechano.html
Because it uses mechanical rods rather than electricity, the possibility of available AGI magic is reduced. I wouldn't expect such a computer to be on the market for two or three decades. Unfortunately, ten paragraphs from the end it is mentioned that, in addition to heat and phonon waste-energy signatures, gravity waves are also emitted. I'll be studying diamond rod-logic architectures in more detail later.
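For readers unfamiliar with rod logic, here is a toy Python model of the idea. It is purely illustrative, not the actual Drexler/Zyvex design; it ignores clocking, spring returns, and reset, and captures only the core mechanism: a sliding probe rod advances only when no input rod's knob blocks it, so a single gate computes NOR, and NOR is universal.

# Toy model of rod logic (illustrative; not the real mechanical design).
# A probe rod advances (True) only if no perpendicular input rod blocks it,
# so each gate computes NOR; all other logic is built from NOR.

def rod_nor(*inputs):
    return not any(inputs)

def rod_not(a):
    return rod_nor(a)

def rod_or(a, b):
    return rod_not(rod_nor(a, b))

def rod_and(a, b):
    # De Morgan: a AND b == NOR(NOT a, NOT b)
    return rod_nor(rod_not(a), rod_not(b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(rod_and(a, b)))

The point of the mechanical substrate is that the same boolean behaviour is obtained with no electrical side channels, though heat, phonon, and (per the paper) gravity-wave signatures remain.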
Nick Hay <nickjhay@hotmail.com> wrote:
Phillip Huggan wrote:
> To clarify, there would be no user interface.
There is necessarily a user interface if we are using the AI to design systems.
We need to influence it to direct the design process, and it needs to
influence us to communicate the designs.
> And nothing but mechanical
> computer components; no danger of physics tricks. A purely mechanical
> computer logic. Yes, I know AI Boxing talk is passé, but can someone quickly
> point out why this architecture would fail?
This scenario is effectively equivalent to the standard AI box; the box
necessarily has a hole to allow communication between human and AI.
There is nothing to stop the AI from producing a design which convinces the human
reading it to release the AI (e.g. by connecting it to the internet). It is
difficult to state with certainty that no such design exists, as the space of
possibilities is exponentially huge and humans are knowably imperfect.
Alternatively, the AI could design a useful machine which looks definitely
harmless to a human but in actuality releases the AI. As the space of
possibilities is exponentially huge, and as humans have limited ability to
predict a design's true effect, a sufficiently smart AI may be able to invent
such a design.
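To put rough numbers on "exponentially huge" (the figures below are assumptions chosen for illustration, not taken from the posts): even a single page of blueprint notation admits more distinct designs than any review team could ever audit, so "we checked it and it looks harmless" can never be an exhaustive claim.

# Assumed numbers, purely illustrative: a one-page blueprint of ~2000
# symbols drawn from a 64-symbol notation.
alphabet_size = 64
page_length = 2000
designs = alphabet_size ** page_length          # 64**2000 == 2**12000
print(f"about 10^{len(str(designs)) - 1} possible one-page blueprints")
# -> about 10^3612, versus perhaps a few thousand designs humans could vet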
> If the AGI is friendly, we lose out on a great deal of its immediate
> engineering prowess building only safe product designs it spits out. If it
> is unfriendly, we are saved. It doesn't matter how smart an entity is if the
> jail permits no key.
In any AI boxing scenario the humans form a lock, and any influence the AI has
on us (e.g. any time we look at anything it has produced) gives it access to
that lock. The smarter the entity, the better it is at cracking locks.
> Phillip Huggan wrote:
>
> Can friendliness be implemented by specifying that the AGI shut itself down
> if vis self-coding directly interacts with the universe beyond a very limited
> set of computer operations?
If successful this wouldn't implement Friendliness, as SIAI defines it, but a form
of "safe" usefulness. Such an AI has no protection against humans getting what
they wished for but didn't want, and no moral judgement to protect against misuse.
Incidentally, which limited set of computer operations?
-- Nick Hay
> In concert with a relatively inert computer substrate
> such as a molecular computer, how could it cause harm? We could still
> benefit from such a limited architecture a great deal.
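One concrete (and purely hypothetical) reading of "a very limited set of computer operations": a machine that executes only whitelisted, side-effect-free operations and halts permanently the moment anything outside the whitelist is requested. The sketch below is an invented illustration of the concept, with all names assumed; note that text emission remains the one output channel, which is exactly the hole Nick's argument applies to.

# Hypothetical whitelist machine (all names invented for illustration).
ALLOWED_OPS = {"add", "mul", "compare", "store", "load", "emit_text"}

class ContainmentViolation(Exception):
    pass

class BoxedMachine:
    def __init__(self):
        self.halted = False
        self.memory = {}
        self.output = []   # the sole channel to the outside world

    def step(self, op, *args):
        if self.halted:
            raise ContainmentViolation("machine is shut down")
        if op not in ALLOWED_OPS:
            self.halted = True   # shut down on any non-whitelisted operation
            raise ContainmentViolation("disallowed op: " + op)
        if op == "add":
            return args[0] + args[1]
        if op == "mul":
            return args[0] * args[1]
        if op == "compare":
            return args[0] < args[1]
        if op == "store":
            self.memory[args[0]] = args[1]
        elif op == "load":
            return self.memory.get(args[0])
        elif op == "emit_text":
            self.output.append(str(args[0]))

m = BoxedMachine()
m.step("store", "x", 2)
m.step("emit_text", m.step("mul", m.step("load", "x"), 21))
try:
    m.step("open_network_socket")   # outside the whitelist -> permanent halt
except ContainmentViolation as err:
    print("halted:", err)
print(m.output)   # ['42']

Even here the emitted blueprint text itself is the attack surface, per the point above about designs that convince their readers.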