From: Peter Voss (email@example.com)
Date: Thu Jul 14 2005 - 18:35:33 MDT
By "mind of its own" I simply mean the common sense notion of 'pursuing your
own goals' versus 'doing what you're told'. If AGIs/SIs come in both
flavors, then the question arises: is one more dangerous than the other? Why?
By 'run-of-the-mill AGI' I simply meant an AGI not specifically designed to
be 'Friendly' or 'Unfriendly', but 'simply' to perform functions that
require human-level intelligence.
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Eliezer S.
If a cognitive system has the ability to predict reality and compose
plans to manipulate it, which I generally take as the definition of AGI,
what is left to say - what specific features are you talking about - when
you ask whether the AI has "a mind of its own" or "doesn't"? I am honestly
confused here. What's the difference between "AI" and "mind of its own" -
what functionality are you divvying up between one and the other?
> What are specific risks that a run-of-the-mill AGI poses?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT