From: petervoss1 (peter@optimal.org)
Date: Fri Jul 07 2000 - 09:49:27 MDT
I wrote:
> > ... There is no absolute 'platonic' right. 'Right' is always with
> > respect to a specified or implied goal and beneficiary.
Eliezer replied:
> ... as far as we know... This is one of those cases where it's important
> to accept both possible outcomes. (I'm not just saying that to be
> ostentatiously rational. I really don't know.)
I strongly disagree. Words and concepts refer to things in reality; that is
a basic fact of epistemology. 'Right' has no coherent meaning if divorced
from a goal. We can be certain of this. I don't want to debate this here,
but if you are interested, you can find more comprehensive arguments in a
book chapter or two I've written, or in the Objectivist literature.
The only relatively simple thing I can suggest here is to ask you to try to
formulate what the concept of a 'platonic right' would refer to: What is
its definition? What are instances of it? How would we recognize it and
differentiate it from 'teleological right' (a redundancy)?
> ... But to think of myself as being in opposition to the AI, even for
> humanity's sake, would be to create a distinction between subject and
> object...
But of course there is, and has to be, a distinction between subject and
object: you are you, and you are programming an AI - an entity distinct
from yourself. This does not imply that you must necessarily be in
opposition to the AI, though it may turn out that way. (Think of having a
child, but with greater, though not unlimited, control over its design and
learning.)
> ... It would reduce every programmed instinct or taught piece of philosophy
> to the fact that Eliezer tried to fool an AI and failed.
The aim is not to fool an AI, but to give it the best knowledge we have
(including moral knowledge) as a starting point. At a certain point it will
be able to teach us.
> ... Only a human sees "arguments" or "instincts". An AI sees cause and
> effect. On some level, programming a seed AI with an instinct ultimately
> bears no more information than the flat statement "Peter Voss wants you to
> have this instinct".
Instincts and arguments are two different issues. Instincts are essentially
the pre-programmed heuristics needed for bootstrapping - we *expect* the AI
to overcome and improve on those. 'Arguments', or weight of evidence, on
the other hand, will always be needed by an AI - it is neither omniscient
nor infallible. Many (ultimately all) important questions about reality
cannot be decided deductively. Induction, the very process of discovering
regularities (including cause and effect), always relies on fuzzy
'arguments', subject to the given context.
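
To make that last point concrete, here is a minimal sketch - my own toy
example, not anything from the exchange above - of induction as
accumulated weight of evidence. A Bayesian update on coin flips never
deductively proves a hypothesis; it only shifts its credibility, and the
conclusion stays open to revision:

# A toy example (hypothesis and numbers are my own assumptions): induction
# as accumulating 'weight of evidence' rather than deductive proof.
# Hypothesis H:  "this coin is biased toward heads (P(heads) = 0.8)"
# Alternative:   "this coin is fair (P(heads) = 0.5)"

def update(prior_h, observation):
    """One Bayesian update on a single coin flip ('H' or 'T')."""
    p_obs_h = 0.8 if observation == "H" else 0.2   # under the biased hypothesis
    p_obs_alt = 0.5                                # under the fair hypothesis
    joint_h = p_obs_h * prior_h
    evidence = joint_h + p_obs_alt * (1.0 - prior_h)
    return joint_h / evidence

belief = 0.5   # start undecided between the two hypotheses
for flip in "HHTHHHHTHH":
    belief = update(belief, flip)
    print("saw %s, P(biased) = %.3f" % (flip, belief))

# No run of heads deductively settles the question: P(biased) approaches 1
# but never reaches it, and a streak of tails would pull it back down. That
# is the sense in which inductive 'arguments' stay fuzzy and contextual.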