How open do we have to be?

From: turin (turin@hell.com)
Date: Tue Apr 25 2006 - 20:56:30 MDT


I haven't been keeping up, but I'm responding to Jeff Albright's earlier discussion of objective and subjective values.

The whole Enlightenment ideal was that if you eventually get enough people in the right place, all talking together, everyone could come to the "truth", or at least to an agreed-upon solution to a set of goals that makes sense to everyone. That is a subtext for democracy in some places...

In a world of increasing surveillance and countersurveillance, how secret or open should the development of SI be, and is it even something capable of being hidden? We talk a lot about how to make the SI friendly, but it seems to me we are in more danger of our current "common sense" values getting in the way.

There is the myth of the killer or psychopathic robot. There is already a subset of people who are "xenocidal" toward a "species" which doesn't even exist yet.

Even more important, of course, are the people who would use SI (or perhaps "parahuman intelligence" is the better term). We talk about friendly and unfriendly Superintelligence, but I don't think an unfriendly Superintelligence would be Superintelligent... if friendliness is our most important issue for Superintelligence, let that be not A but THE defining characteristic. I am curious how humans are and will try to put very powerful computation to corporate and/or military use.

Nick Bostrom has already asserted that this list won't develop SI. What are the conditions under which a friendly SI will emerge, including the ways in which what could potentially become SI is used by humans? I am much more afraid of the psychopathic person than the psychopathic robot. How do we control ourselves as individuals or in groups, or is this an unfair question?

We are trying to preempt evolution by finding a faster, better way of discovering more powerful algorithms than it has used so far. Let's preempt it at the level of individual and societal values.

It's more than a question of just how open we need to be about the "code" if computationalism turns out to be true.

How much of the "truth" about SI do we have to tell?

postscript

Has anyone read Eric Baum's "What Is Thought?" yet? I don't feel like starting two threads at once, but maybe I can manage.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT