From: Durant Schoon (firstname.lastname@example.org)
Date: Thu May 29 2003 - 20:13:06 MDT
> From: "Ben Goertzel" <email@example.com>
> Date: Thu, 29 May 2003 21:49:42 -0400
> > >> Volition is a component of "deep, real human happiness" but I
> > don't think
> > >> it's the only component.
> > >>
> > >> One could construct extreme cases of human minds that were strongly
> > >> self-determined yet were morally and aesthetically repugnant to all of
> > >> us...
> > >
> > > That's not the question, though; the question is whether we, or a
> > > Friendly AI, should interfere with such a mind.
> A nontrivial moral question, right?
> As a parent, I would interfere with my child's self-determination if they
> were going to do something sufficiently destructive. I'd also try to change
> them so that they no longer *wanted* to do the destructive thing.
> Because we have values for our kids that go beyond the value of
> self-determination.
> But we presumably don't want a Friendly AI to take on this kind of parental
> role to humans -- because it's simply too dangerous??
Ah, but I could imagine a scenario where an FAI says "You really don't
want to mess around with this type of nano and that recursive self-assembly
routine. Would you like a mind upgrade to understand why?"
In the case of a human child that young, the child is not even old enough
to know to want the upgrade. And the human parent is not able to transmit
a perfect mental module anyway. I'm assuming an FAI dealing with a
transhuman will be able to get around this problem, and that humans will
have the choice to augment ourselves mentally into what we currently call
transhuman.
The analogy is about relative intelligences, but in this case absolute
intelligence (can you understand that you need more intelligence?) makes
the difference.
ps - off to see Matrix Reloaded a 3rd time...can I possibly like it that
much?
-- Durant Schoon
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT