From: Samantha Atkins (email@example.com)
Date: Wed Apr 04 2001 - 22:15:18 MDT
"Christian L." wrote:
> Dale Johnstone wrote:
> >Christian L. wrote:
> > >When I first started subscribing to this mailing list, I thought that the
> > >goal of SingInst was to build a transhuman AI. I was wrong. The goal is
> > >obviously to build a Utopia where Evil as defined by the members of the
> > >list will be banished. The AI would be a means to that end, a Santa-machine
> > >that uses his intelligence to serve mankind.
> >I can't speak for SingInst but as someone who's working towards the same
> >goal, I'm in it because I see a possible way to do away with death and
> >misery once and for all. Although I'm obsessed with AI, I'd switch to
> >collecting detergent coupons if that would do any good. Unfortunately it
> >doesn't. Building a transhuman AI though does.
Of course the rub is that it might end death and misery forever by
simply wiping out all biological sentients, by accident or on purpose.
Or we might find the Sysop, no matter how wise and benevolent and
transparent, intolerable for the types of creatures we are, but be
unable to choose differently. There are many possible scenarios. There
are also many possible scenarios without a Sysop. In some of them the
singularity fizzles out through some run of bad luck and human
incompetence. In others, things along the way multiply human misery a
lot while reducing it for a few. In still others, enough of us grow up
to our new powers to create a more viable society, better able to deal
with the exponential ramp. In others we wipe ourselves out.
But it is not at all obvious to me that building a transhuman AI is the
solution most likely to produce a good outcome.
> It might, yes. I agree.
> >List members do *not* get to define what is evil and what is banished.
To some degree we had better, or there is little reason to believe this
thing is actually going to be good and worthwhile to support.
> "To eliminate all INVOLUNTARY pain, death, coercion, and stupidity from
> the Universe."
Plus or minus a lot of definitions, starting with "involuntary". On the
face of it, the list looks like eliminating the negative effects of
actions. To the degree this is so, I am not sure the resulting
non-transcendent beings would ever learn or develop further.
> >The world by and large hasn't woken up to the facts yet. It's clear that
> >things aren't going to get any better by themselves. I hope you can now
> >understand the urgency in our desire to apply a little transhuman
> >intelligence to the problem.
> I assure you, I did understand it before. I just don't see the point in idle
> speculation about the actions of eventual SIs. It will do as it pleases. If
> we manage to program it into Friendliness, it will be Friendly. Maybe it
> will ignore humans. Maybe it will kill us. I don't know.
> My interests lie in getting to the Singularity. After that, the SI is
> calling the shots. I don't think that you can plan ahead beyond the
> singularity, and I certainly am not going to. You can do your best in trying
> to program a Friendly AI, but in the end, the AI will be in charge.
Assuming you care about anything other than an ever-increasing
intellect, exactly what would motivate you to create something that
quite possibly will destroy everyone and everything else you ever cared
about including yourself?
We are spinning the barrel and pulling the trigger in a cosmic game of
Russian roulette. The barrel holds thousands of rounds and only a few
chambers are empty. If we "win", we are either set for all eternity or
get the chance to play again some other time. Except that it is the
entire world and all of us, forever, that the gun is pointing at. To do
that we have to be very, very damn sure that there is no other way
and/or that this (building the SI) offers the best odds we have.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT