From: Samantha Atkins (samantha@objectent.com)
Date: Sat Jun 02 2001 - 22:22:59 MDT
Gordon Worley wrote:
>
> At 10:04 PM -0700 6/1/01, Samantha Atkins wrote:
> >Gordon Worley wrote:
> > > Here is another thing: an egoistic AI would only be interested in
> > > other individuals, not species. Ve couldn't care less what happens to
> > > humans, but ve might take an interest in a few humans and see to it
> > > that they are uploaded before all of the nearby space is turned into
> > > computronium.
> >
> >On what grounds would an AI be only interested in individuals?
> >I would think it would take some interest in the fate of the
> >species that brought it into being.
>
> This is because egoism is a philosophy based on the idea that all
> action and thought comes from the individual, not a group. An
Then it is a silly philosophy, because individuals are
individuals within the context of a group. No individual member
of homo sap could transcend the potentials inherent in homo sap
without creating that which is not homo sap. I could imagine
that even an "egotistical" AI might be interested in a species
that had that sort of potential, and in either further
actualizing or suppressing that potential.
> egoistic AI would, therefore, only be interested in the actions and
> thoughts of individual humans, not the entire species, because that's
> how ve thinks. An altruistic AI (in the traditional sense, not the
> Friendly sense) would conversely be interested in helping the largest
> numbers possible (i.e. the whole species or several species). An
That is a pretty silly implied definition of altruism too.
> egoistic AI would fully encompass the philosophy of egoism (and
> there's no good reason why ve wouldn't be the ideal egoist short of
> bad code) and make all decisions based on the goal of being egoistic
> as the ultimate goal, just as a Friendly AI makes all decisions based
Being egoistic as the ultimate goal is pretty empty. What does
it mean, exactly? If I have nothing in my head (figuratively
speaking) but being egoistic, then I wouldn't have much going
for me as an average intelligence, much less as a
superintelligence.
> on the ultimate goal of being Friendly (well, there seems to be some
> dispute about this, but if you want just replace the previous example
> with whatever equivalent model you think will work best :^)).
>
> To respond to your thought, an egoistic AI would take an interest in
> the *individuals* that did the groundwork to make ver, but not in
> the species as a whole (other than maybe trying to learn the
> species's anatomy, languages, etc. as a side effect of being
> interested in some individuals, but only insofar as this is necessary
> to interact with the individuals).
Is there any significance to this seemingly empty line of
speculation? I confess that I don't get it.
- samantha