Re: ethics

From: Samantha Atkins (samantha@objectent.com)
Date: Wed May 19 2004 - 23:20:48 MDT


On May 19, 2004, at 11:44 AM, Michael Wilson wrote:

>>> any black-box emergent-complexity solution is to be avoided
>>
>> That’s equivalent to saying never make an intelligent machine because
>> you’ll never understand a mind greater than your own.
>
> No, it isn't, due to the magic of chunking. Once you've designed a
> given operator, you can reduce it to its preconditions and
> postconditions. You can then proceed to combine operators using formal
> structures that produce compound preconditions and postconditions,
> including ones that may have an arbitrarily complex internal structure
> when instantiated (e.g. the state graph of a running search algorithm).
> It is perfectly possible to produce a system too complex for you to
> understand in detail without giving up and starting to try things
> speculatively without being able to scope their consequences. Positive
> safety requires that you prove that every operator won't do something
> unpleasant (by checking that the range of possible results falls
> entirely within acceptable bounds) before you execute it.
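
For concreteness, the chunking described above might look like the
sketch below: each operator carries a checkable precondition and
postcondition, and two operators may be chunked only when the first's
postcondition entails the second's precondition. Everything here (the
Operator class, the interval-valued conditions, entails()) is a
hypothetical illustration, not anyone's actual design:

from dataclasses import dataclass

# Hypothetical sketch: operator states modelled as integer intervals.
@dataclass
class Operator:
    name: str
    pre: range   # states the operator may be applied to
    post: range  # states it guarantees on completion

def entails(post: range, pre: range) -> bool:
    # post entails pre iff every state satisfying post satisfies pre
    return post.start >= pre.start and post.stop <= pre.stop

def chunk(a: Operator, b: Operator) -> Operator:
    # Combine "a then b" into one operator, itself reducible to a
    # precondition/postcondition pair, so it needs no re-inspection.
    if not entails(a.post, b.pre):
        raise ValueError(f"cannot chunk {a.name};{b.name}")
    return Operator(f"{a.name};{b.name}", a.pre, b.post)

clamp  = Operator("clamp",  pre=range(-1000, 1001), post=range(0, 11))
double = Operator("double", pre=range(0, 51),       post=range(0, 101))
safe   = chunk(clamp, double)  # proven once, then used as a black box

Once chunked, `safe` is handled entirely through its compound
precondition and postcondition; its internals never need to be
re-examined.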

Sorry, but the limits on human ability to practice such a technique
beyond the relatively shallow end of the system design space are clear.
No matter how much you chunk, we are hindered by computational speed,
by the limits of conscious attention, and by our inability to use much
of our brains computationally; those limitations keep us from grasping
patterns beyond a certain complexity, patterns we may need to grasp in
order to build more powerful tools/programs to offload much of the
work. The humans in the loop severely limit what can be done. Take the
humans out of the loop and you quickly need strong AI, or at least
substantially augmented humans. Even with AI, complexity would still
need to be reckoned with. What we are attempting cannot be built as if
we lived in some Newtonian clockwork universe.

>
>> It would only take a few seconds to write a computer program to find
>> an even number greater than 4 that is not the sum of two primes and
>> then stop, but what will the machine do? Will it ever stop?
>
> This is an example of the distinction between acceptable and
> unacceptable uncertainty. We don't know if the program will terminate,
> but we know that it won't suddenly start modelling the programmer or
> rewriting its own code or converting the world into grey goo.
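
The program in question is a one-screen affair; a purely illustrative
Python sketch (the function names are mine):

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample() -> int:
    # Halts iff some even number > 4 is not a sum of two primes;
    # whether that ever happens is exactly Goldbach's open conjecture.
    n = 6
    while True:
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1)):
            return n  # finding this would disprove the conjecture
        n += 2

Nobody knows whether it ever returns, but its behaviour is bounded: it
does arithmetic and nothing else.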

Anything I would want to see develop from these efforts would model
the programmer and exceed the programmer's limitations. That is rather
the whole point. It will also self-examine and recursively improve. It
will become vastly more capable than we are and much more aware.
Otherwise, what is the point?

- samantha


