From: Krekoski Ross (firstname.lastname@example.org)
Date: Sat Mar 01 2008 - 19:31:39 MST
You can have different levels of control over different cognitive nodes.
Nodes that assign particular imperatives, or assign relative weighting to
different goals, can be hard-coded and not modifiable without, say, physical
access. Even if it is a distributed physical architecture, you can force it
to communicate through a central machine, or complex of machines, that acts
as a traffic light, so to speak. Each individual node can be 'dumb' in the
sense that it needs to communicate some tasks with this central network. One
criticism of such an approach is that the individual nodes could
potentially self-organize away from our traffic light. But I'm not sure
that's viable. Suppose each individual node has an area of its programming
that is 'visible' to the AI and an area that's 'invisible', and we program
the communications protocol into the 'invisible' portion of the programming.
At the very least this would enable humans to control the speed of
development at any given stage.
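To make the traffic-light idea concrete, here is a toy sketch in Python. Everything in it is my own illustrative assumption (the class names, the per-tick rate limit, and the use of name mangling as a stand-in for a genuinely inaccessible 'invisible' region); it is not a real containment proposal, just the shape of the arrangement:

```python
# Toy sketch: 'dumb' worker nodes must route restricted tasks through a
# central Gatekeeper, which releases them at a human-chosen rate. All
# names here are hypothetical illustrations of the idea in the text.

import queue

class Gatekeeper:
    """Central chokepoint: every restricted task passes through here."""
    def __init__(self, allowed_rate):
        self.allowed_rate = allowed_rate   # tasks permitted per tick
        self.pending = queue.Queue()

    def submit(self, node_id, task):
        # Nodes can only enqueue; they cannot force release.
        self.pending.put((node_id, task))

    def tick(self):
        """Release at most allowed_rate tasks; the rest keep waiting.
        This is the human-controlled throttle on development speed."""
        released = []
        for _ in range(self.allowed_rate):
            if self.pending.empty():
                break
            released.append(self.pending.get())
        return released

class Node:
    """A worker node. Its routing protocol lives in the 'invisible'
    region -- name-mangled here as a weak stand-in for code the node
    genuinely cannot inspect or rewrite."""
    def __init__(self, node_id, gatekeeper):
        self.node_id = node_id
        self.__gate = gatekeeper   # hidden from naive introspection

    def do(self, task, restricted=False):
        if restricted:
            # Restricted work must go through the central machine.
            self.__gate.submit(self.node_id, task)
            return "queued"
        return f"done: {task}"     # local, unrestricted work

gate = Gatekeeper(allowed_rate=1)
nodes = [Node(i, gate) for i in range(3)]
for n in nodes:
    n.do("self-modify", restricted=True)
print(gate.tick())   # only one restricted task is released this tick
print(gate.tick())   # the rest trickle out at the rate humans set
```

The point of the sketch is only that the release rate sits entirely on the gatekeeper's side: the nodes' visible code can queue work but has no handle on the throttle.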
It's somewhat analogous to human cognition. Regardless of our more
intellectual capacities, we still have emotional responses to more or less
everything; these are basically hard-coded through our DNA and constrain some
of our more destructive behaviours. At the same time, while we have both an
intellectual and a more experiential understanding of our own mental
architecture, the two are disjunct-- I am not immediately conscious of all
of my neural pathways, and arguably cannot be (since awareness of a specific
neural pathway requires other neural pathways to observe it), and there is
no good reason, IMO, to think that this would be any different with software.
On Sun, Mar 2, 2008 at 2:08 AM, Matt Mahoney <email@example.com> wrote:
> --- firstname.lastname@example.org wrote:
> > On 2/11/08, Eliezer S. Yudkowsky <email@example.com> wrote:
> > > Similarly, building an AI that knows what you "really mean" by
> > > "Friendly" when you type "Please make a Friendly AI" at the prompt,
> > > is FAI-complete, and not any easier than building a Friendly AI.
> > Suppose you could build an AI that obeys your orders, but is not
> > friendly. To build a FAI you still need to do more research. The AI
> > could help you do your research, as such it would be a useful tool in
> > arriving at a FAI.
> If you have to do the research, it's not AI, just a tool.
> -- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Fri May 24 2013 - 04:01:07 MDT