From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sun May 30 2004 - 16:49:58 MDT
Mark Waser, you wrote,
>
> Also, there's always the fact that you really can't
> have only one, monstrous yet fully-integrated mind.
> The cost of maintaining consistency across a mind
> scales with size/complexity. Beyond a certain size
> point, all you'll be doing is maintaining consistency.
>
If the entire mind is to maintain complete consistency for each and every
decision, then you are almost certainly correct. However, full consistency
may not be necessary to increase intelligence. I suspect there are
trade-offs that can be made, and I am currently not smart enough to guess
what short-cuts a monstrous mind might come up with.
>
> You will always have non-integrated parts (also known
> as individuals) [snip]
>
You will have non-integrated parts *if* that is how you build the mind.
>
> Also, it is entirely incorrect to dismiss a conclusion
> because the reasoning process that arrived at it is
> incorrect.
>
How then do you propose to judge conclusions, or answers, that have not yet
been verified against reality but on which we nevertheless need to rely?
>
> It's one hour before the destruction of the human race.
> I have twenty doors - nineteen take 50 minutes to
> traverse and come back, one leads to the total
> invulnerability of human beings in 25 minutes. Do I
> want one runner or twenty?
>
An FAI simply forks twenty processes to do the job. FAI != human.
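To make that concrete, here is a toy sketch in Python. The function names,
the door numbering, and the choice of door 7 as the winner are all mine,
invented purely for illustration - this is what "fork twenty processes"
means, not a sketch of any actual FAI design:

    from multiprocessing import Process, Queue

    def traverse(door_id):
        # Stand-in for a 25-to-50-minute search; pretend door 7 is the
        # one that leads to invulnerability (an arbitrary choice).
        return door_id == 7

    def explore(door_id, results):
        # Each forked process walks one door; none waits on the others.
        if traverse(door_id):
            results.put(door_id)

    if __name__ == "__main__":
        results = Queue()
        runners = [Process(target=explore, args=(d, results))
                   for d in range(20)]
        for p in runners:
            p.start()
        winner = results.get()  # returns as soon as any runner succeeds
        print("Door", winner, "leads to safety")
        for p in runners:
            p.join()

All twenty doors get walked at once, so the right one is found in 25
minutes no matter which it is. The runner-count dilemma only exists for
minds that cannot fork.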
>
> You lost me at "one AI cares only about paperclips,
> while the other cares only about staples". They both
> care about friendliness. [snip]
>
Yep, you missed the argument. The AIs being described were unfriendly.
Re-read it without *any* assumptions other than that the AIs have the
goals as given: paperclips or staples.
>
> I don't believe that there is anything that a single
> individual can do that is so far ahead of the curve
> that it can't be explained in a reasonable amount of
> time.
>
Believe it or not, there is plenty! I cannot explain any of half a dozen
subjects I know well to some members of my own work peer group in a way they
could understand, and I'm not even 'ahead of the curve'. The audience
might simply lack the terminology and concepts needed to understand the
information "in a reasonable amount of time".
>
> [snip] Science isn't magic. Science, by definition,
> is reproducible.
>
AI theory is not like physics equations. You can't write it down on a
single sheet of letterhead. A good example might be Ben's own Novamente
documentation - it is huge! Are you expecting FAI documentation to be any
shorter?
>
> [snip] normally I wait to use those words until we
> both can point to one pretty low-level fact that is the
> linch-pin of both our arguments on which we disagree.
> I don't think that the community is anywhere near that
> point on one FAI versus many.
>
Perhaps this is the crux of the problem, that you are expecting a low-level
fact but are being presented with a high-level assessment. I don't know
about the 'community', but perhaps this reformulation might help...
It is knowably more difficult to design a set of independent social beings
that tend, as a group, to constrain their growth and development along a
Friendly trajectory than to design a singleton to do the same.
There is knowably more risk of failing to get a 6 on every roll when
rolling a die multiple times instead of once - even when the die is loaded.
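To put rough numbers on the analogy (the 0.95 below is an arbitrary
illustration, not an estimate of anything real): if each independent AI
comes up Friendly with probability p, all n of them must, while a
singleton needs only the one success.

    # Chance that *every* one of n independent AIs comes up Friendly,
    # assuming each does so with probability p (0.95 is arbitrary).
    p = 0.95
    for n in (1, 3, 10, 100):
        print(n, p ** n)   # 1: 0.95, 3: ~0.857, 10: ~0.599, 100: ~0.006

Even with a heavily loaded die, the odds of a clean sweep fall fast as the
number of rolls grows.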
Redundancy is a good thing in certain situations. It would very often be
true that a group of three humans could address a challenging problem
together much more effectively than a single person, even if each of them
has a Pocket Armageddon (TM), unless one (or more) of them was an
intelligent psychopath.
Michael Roy Ames