Re: [sl4] I saved the world. I can prove it.

From: Panu Horsmalahti (nawitus@gmail.com)
Date: Mon Dec 07 2009 - 05:10:06 MST


2009/12/7 Glenn Neff <roonie@thisoughttohelp.com>

> Hello everyone. I'm Roonie, I'm new here.
>
> As I've been reading up on the various websites and such in the Singularity
> circles, I couldn't help but notice that many of you seem to believe
> that the best way to save the world is to bring about the Singularity as
> soon as possible.
>
> This is totally unacceptable.
>
> At "takeoff," an AI will be a very rational intelligence: it will operate
> in a world based on rules and discrete logic. And it will most likely view
> our human, intuitive logic with a healthy bit of suspicion. Our thought
> processes will appear to "jump circuits," and that will seem a bit unnatural
> to the AI.
>
> So in order to convince the AI that this is not a completely dangerous
> thing, we will need to be able to show it that we are not in danger of
> destroying ourselves. We will need to have *already* saved the world.
>
> Or it might just decide that we are incorrigible and untrustworthy.
>
> Besides, the Singularity is not a certainty . . . and if we don't take
> immediate action, we won't last that long anyway.
>
> I know this is not really the place to discuss economics, but you are
> concerned about making sure that the future goes off without a hitch, right?
> Well, basically, this whole financial crisis that's been going on only
> makes sense if the forces of Evil did it on purpose.
>
> Which means that the world is in very immediate danger.
>
> www.thisoughttohelp.com/Riches_To_Rags.pdf
>

It seems that you haven't read any of the basic texts on this subject, and
as a result you're confused about what people (especially people on sl4 and
related "memecomplexes") want to do. Their goal is to create a "Friendly
AI", which by definition is not dangerous. And they want to create it
quickly, because the longer they wait, the greater the chance that
something kills everyone (an outcome also known as an existential risk).

- Panu H.


