From: king-yin yan (y.k.y@lycos.com)
Date: Fri Aug 15 2003 - 15:21:06 MDT
Hi Nick,
>Formal mathematical systems are one tool humans use to solve problems. I don't
>think it's well suited to the task of describing and transferring human moral
>structure and content to an AI. I don't think this process can be well
>described mathematically.
>
>*** UPDATE: I mean formal systems humans create directly via axioms.
>
>"Friendliness" is very different to "friendliness". A Friendly AI is one that
>shares the moral complexity we all share - the adapatations we use to argue
>and think about morality as we are now. The Friendly AI doesn't quite share
>all human moral complexity, not all parts are desirable (eg. selfish aspects
>of morality), but humane moral complexity the kind of morality structure (and
>content) we'd want to have.
>
>Friendliness isn't a formal system, certainly not in the moral-law sense -
>such systems are far too fragile. Typically, manipulating or adding axioms
>vastly changes the system. Formal systems in general lack the flexibility
>and structure of the human thoughts that create them. We don't want to
>transfer the moral codes of law that humans can create, but the ability to
>create those codes in the first place. The programmers don't decide what is
>right and wrong.
There is a dilemma here: on the one hand, a formal system (made of
simplistic rules and thus mathematically analysable) will be predictable
and safe, but it can't handle the moral complexities that we would want.
On the other hand, the complex moral structure that you described
above will require a connectionist approach or something equivalent -
meaning that it has distributed representations, graded responses,
generalization, and the ability to be *trained*. Then you have a big
problem. In practice, such a connectionist network is quite similar to a
human being, but much smarter. Every human would end up trying to
talk to this AI like crazy in order to influence its behavior in their favor...
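To make those properties concrete, here's a minimal toy sketch in Python
(my own illustration, nothing from CFAI - the "moral features" and the
graded targets are entirely made up). A tiny network stores its judgment
in distributed weights, gives graded responses, is trained by gradient
descent, and generalizes to a case it never saw:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical situations as feature vectors:
# [causes harm, is consensual, benefits others, is deceptive]
X = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
# Graded (not binary) acceptability targets - a made-up moral gradient
y = np.array([[0.0], [1.0], [0.7], [0.2]])

# Distributed representation: the "judgment" lives in all weights at once
W1 = rng.normal(0, 0.5, (4, 6))
W2 = rng.normal(0, 0.5, (6, 1))

for _ in range(5000):                 # plain gradient-descent training
    h = sigmoid(X @ W1)               # hidden layer
    out = sigmoid(h @ W2)             # graded response in [0, 1]
    delta = (out - y) * out * (1 - out)
    grad_W2 = h.T @ delta
    grad_W1 = X.T @ ((delta @ W2.T) * h * (1 - h))
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

# Generalization: an unseen situation still gets a judgment, but nothing
# in the training set *guarantees* what that judgment will be - which is
# exactly the control problem I'm worried about.
novel = np.array([[1, 0, 1, 0]], dtype=float)   # harmful yet beneficial
print(sigmoid(sigmoid(novel @ W1) @ W2))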
> [...]
>
>You should really read CFAI: Beyond Anthropomorphism :) Our position right
>now, with a whole bunch of near-equals getting more and more powerful
>weapons, is vulnerable. Indeed, every existential risk makes us vulnerable -
>that's half the point of a Singularity in the first place. We can't eliminate
>all risks, or remove all vulnerability, but we can decrease them.
I can understand why you're alarmed by intelligence augmentation; what
you say is basically: "Computational power is dangerous, let's concentrate
all the power in one AI and let it rule". But you seem to downplay the
fact that 1) the Friendliness system is designed by human programmers;
2) it needs to be trained by humans. I'm afraid a lot of people will be
skeptical about this.
>FAI-originated superintelligences aren't like tribal leaders, or tribal
>councils, or governments, or any other [human] structure which is
>superordinate to other sentients. The SI doesn't have, nor does it want,
>political control as humans do. It wants sufficient control to ensure bullets
>simply don't hit anyone who doesn't want to be shot, for instance, but it
>doesn't want sufficient control to ensure everyone "agrees with it", for
>instance. Anthropomorphisms - that is, almost any comparison between AIs
>and humans - don't help understanding.
That sounds like a universal political solution. The FAI will decide whether
wars should be fought or not, who the criminals are and what punishments
they deserve, etc.
>> Personally I think the most appealing solution is to let people augment
>> themselves rather than create autonomous intelligent entities. But we
>> don't have a direct neural interface to connect our brains to computers.
>
>Personally I think that's one of the least appealing solutions. Humans are
>autonomous intelligent entities with reams of known flaws. Fears about an
>entity, or group of humans, rising among the rest and subordinating them are
>far more founded than those about AIs because, historically speaking, that's
>what humans *do*. Often they proclaim they're doing the best for everyone,
>and often they'll believe it, but rationalisation distorts actions in a
>self-biased manner. Unless there's some way to augment everyone at the same
>rate, and in fact even then, it doesn't look good.
What you're depicting here is dangerously close to dictatorship. On the
other hand, free augmentation is actually not that bad. Just because
humans are free to augment their intelligence does not mean that they
will start using that intelligence to harm others. Most likely a kind of
morality will emerge in the population, so no one will have an absolute
advantage over others.
>Part of the appeal of the Friendly AI approach is starting from a blank slate.
>Making a mind focused on rationality and altruism, not politics.
It's much more complicated than that, if you look closer...
>However, there is a matter of time here. I think it's far easier to spark a
>superintelligence from an AI than from a human brain, in the sense that I
>imagine it'll be possible to do the former first. So attempts at solely
>augmenting humans will be too late, since I can't see everyone stopping their
>AI projects. However things would be very different if the human augmentation
>route to superintelligence was significantly faster than the AI route.
There's an even more important question: whether the AI can really be
controlled by its own designer. On the one hand, you want the AI to have
common sense. That requires a connectionist approach (or something
similar). Once you have connectionism, the AI is pretty much
autonomous. Then it is somewhat like a human child. That would be like
all humanity having only *1* kid and giving him/her all the power.
Now why are you so sure that a connectionist system will behave as
you want, given all its complex characteristics?
>(for further details here, see http://intelligence.org/intro/whyAI.html)
Thanks, I've read that, and I've browsed through CFAI briefly.
>Mind you, various human augmentations could certainly help things - perhaps a
>little device that alerted humans when they're rationalising. Or something
>that increased the level of mental energy without compromising the ability to
>think properly. But augmenting or uploading humans, as the sole route,
>doesn't seem either desirable or practical.
The problem is that AIs are likely to take over rather than care about us,
unless we figure out a way to control them. If we do, then it is a kind
of augmentation (external rather than implanted).
Augmenting/uploading is not necessarily undesirable. Sure, some people
will end up more intelligent than others. But that's just what human
diversity has always been like. No one is likely to attain absolute power,
so I think that's fine.
>> Unless we get uploaded, we'll have to rely on LANGUAGE to
>> communicate with computers. This *linguistic bottleneck* is the hardest
>> problem, I think.
>
>We'll have to rely on thoughts, and the things they do. Using human language
>to directly communicate with an AI is more of a final step - the AI has to be
>quite mature to understand human language directly, I suspect. But there are
>other ways to communicate, or more generally transfer information to the AI.
>For instance, posing simple problems for the AI to solve.
Question: How can you have an AI understand you, without letting it be an
autonomous entity? On the one hand we want a tool; on the other hand
we want to make sure it will not become the master. And actually the crux
of the problem comes from the linguistic bottleneck. Imagine if we had
direct neural interfaces on the backs of our necks - we'd all be busy
playing with add-on modules by now, with magazines advertising all sorts
of gadgets, like body-building etc.
YKY