Wait, no, I want to talk tangentially...

From: H C (lphege@hotmail.com)
Date: Tue Oct 11 2005 - 20:30:44 MDT


>From: Tennessee Leeuwenburg <tennessee@tennessee.id.au>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Wait, no, I want to talk tangentially...
>Date: Wed, 12 Oct 2005 10:06:39 +1000
>
>Pope Salmon the Lesser Mungojelly wrote:
>
>| On Tue, 11 Oct 2005 05:32:07 +0000, H C <lphege@hotmail.com> wrote:
>|
>|
>|> As far as computer intelligence goes, it appears to me you are
>|> drastically underestimating the impact. (Yes, the rabbit hole
>|> goes much, much deeper; in fact, it's something of a black hole,
>|> or ... *cough* Singularity... hehe).
>|
>|
>|
>| Well, let's not get into a fight over who can be most awed by the
>| incomprehensibility of the Singularity. There's no winner to that
>| conversation; it descends instantly into madness. For instance,
>| we can readily assume that over the next decade or two we will
>| invent The Orgasm Button (prototypes, for women only alas, are
>| already in use) and generally otherwise intimately control our own
>| pleasure and pain responses. Therefore the question "what will
>| posthumans do with these amazing powers" can only reasonably be
>| answered "why, whatever they program themselves to do." There's
>| no sense to it.
>
>The male version, of course, may be operated using a hand crank.
>
>I was just struck by a relationship between the ideas about reward and
>motivation, and Heidegger's idea of "thrown-ness". Now, while I think
>that Heidegger is a big booby-head, some of his sentences are very
>thought-provoking. In one of his more lucid moments, he talks about
>how Dasein (read: human minds) is thrown into the world. We can't
>simply change our state of being in the world through a simple force
>of will -- we need to learn what affects us and how it does so, and
>to learn the rules by which our brains operate. Of course, we mere
>humans only ever learn rough approximations of the rules of our own
>behaviour and mental function.
>
>The suggestion of most Singularists is that a sufficiently advanced
>intelligence *would* understand its own mind, and *would* be able to
>modify its state of being in the world, without having to learn its
>own behavioural rules.
>
>I am yet to be convinced that all possible super-intelligences will
>also have the property of being able to change their mental states in
>this way. I don't *think* it's clear from the literature, but not
>having read every book ever written, I don't know what I don't know.
>What are people's opinions about the relationship between
>"superintelligence" and being "thrown into the world"?

I think there is a very good chance we will be destroyed by the first
superintelligence thrown into the world, unless we can make some drastic
changes.

First of all, there needs to exist a broad but tight community of AGI
researchers who are working *together* (and who can therefore trust each
other not to go spawn a singularity on their own). The community is also
going to require a lot of resources (even if it's not a physical
community, people need money to live). The constituents of the community
must also be free of any money-making requirements, and free from any
pressure for "results" other than the pressure innate in the problem.
Some in the community may need to hire programming teams to help them
with their particular experiment/design, and may need to purchase
higher-end computer equipment to test their programs effectively.

Communication and LEARNING are essential features this community must
have. The problem of AI isn't something a group of people specializing
in different areas can get together and make happen by fiddling with a
programming language. Each person has to individually understand each
specialty and how it relates to the others; this is, I predict,
essential to any chance of success.

Another major point is that the community has to agree to certain
standards. First, that Friendliness is not just a necessary requirement
of AI, but the foundational structure inherent in the AI. Second, that
any chance of take-off needs to be handled with extreme care and
deliberation between all parts of the community.

To the best of my understanding, this is EXACTLY the type of community
the Singularity Institute is attempting to create - someone correct me
(or specify for me) if I am mistaken.

>
>Cheers,
>-T


