From: Samantha Atkins (sjatkins@gmail.com)
Date: Mon Apr 28 2008 - 10:52:26 MDT
On Mon, Apr 28, 2008 at 1:22 AM, Stuart Armstrong <dragondreaming@googlemail.com> wrote:
> Samantha wrote:
>
> > What decision? We are building something many orders of magnitude more
> > intellectually capable than ourselves and hopefully it will not eat our
> > face. It is a bit odd to be worrying about which primates or other
> > biological creatures it will benefit, as if we are likely to have much
> > reasonable control over that.
>
> The emphasis is on "WE are building..." If we assume that we will have
> some degree of control over the final ethical system of the AI (if we
> don't, then we are screwed anyway), then we have some degree of
> control as to whether other animals are included. Since we probably
> won't be able to try more than once, knowing what we need to try is
> vital.
>
Why would we believe we have the ability to determine the "final ethical
system" of entities many orders of magnitude smarter than us, not to mention
able to self-improve indefinitely? We have some leverage over the initial
goal system. That is about it. But that in no way entitles us to promise
that this or that group of beings will benefit, in order to appease the
masses or interest them in this work.
> > It is certainly a con job to sell it to the
> > public and claim tax expropriations to build it on such a basis of being
> > for the benefit of the "taxpayers" or some other popular target requiring
> > spending gobs of other people's money.
>
> I don't quite see the argument here (unless you're arguing that the
> chances of an AI eliminating us are high). If the AI will refrain from
> eliminating/enslaving/lobotomising us and if it provides great
> benefits to all, then it seems that this has the strongest case for
> coercive taxation (as the expected benefits far outweigh such things
> as social security or a functioning police force).
>
There is no way to make an airtight (to say the least) case that the AI
will provide a relative utopia. There is also very shaky morality behind
the idea that the few who can, or think they can, more or less understand,
predict, and control the AI sufficiently to make such guarantees have the
right to effectively partially enslave the rest of the population to fund
their efforts.
> > > Why not make the beneficiaries all sentient/conscious beings?
> > >
> > What the heck does that even mean? Benefits according to whom?
>
> Survival, and lack of excessive pain, would be reasonable benefits
> even for dumb animals.
>
What is "excessive pain"? In a world of perfect backups and multiple "real"
and virtual "lives" exactly what is "excessive pain" vs. another learing
experience of consequences reasoably to be expected from a given set of
choices in a give environment? Are you saying that all less than pleasant
feedback should be eliminated regardless in all environments?
- samantha
>
> > To the
> > best guesstimate of the best benefits each would desire if each of the
> > beings was much smarter and more sane and more generally enlightened than it
> > is or perhaps even dreams or can dream of being? ARGH. Hopefully the
> > AGI will not be nearly so sloppy in its thinking. Hopefully we will not
> > wait to build AGI until we get sufficient political agreement that we have a
> > workable plan for uplifting the sea slug.
>
> I hope we don't put off building an AGI until we know how to uplift
> the sea slug. I do hope, however, that we design an AGI that will
> display some ethical behaviour towards at least some animals.
>
> Stuart
>