From: Mark Waser (email@example.com)
Date: Sun May 30 2004 - 17:47:06 MDT
> > You will always have non-integrated parts (also known
> > as individuals) [snip]
> You will have non-integrated parts *if* that is how you build the mind.
Beyond some size, you will either have unintegrated parts or you will have
let a lot of information go.
> > Also, it is entirely incorrect to dismiss a conclusion
> > because the reasoning process that arrived at it is
> > incorrect.
> How then do you propose to judge conclusions, or answers, that have not
> been verified against reality but on which we need to rely?
C'mon. You remove the evidence for a conclusion when that evidence is proved
incorrect. You do NOT automatically mark the conclusion as false unless the
remaining evidence warrants it. And if there is insufficient remaining
evidence, you don't rely on the conclusion and you DO escalate the importance
of finding evidence bearing on it. What other answer did you expect?
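The procedure described above can be sketched in a few lines. This is a
minimal illustration only; every name here is hypothetical, not part of any
real FAI design:

```python
# Minimal sketch of the evidence-handling procedure described above.
# All names and weights are hypothetical illustrations.

def reevaluate(evidence, threshold=2):
    """Re-judge a conclusion after discredited evidence is removed."""
    remaining = [e for e in evidence if not e["discredited"]]
    support = sum(e["weight"] for e in remaining if e["supports"])
    against = sum(e["weight"] for e in remaining if not e["supports"])
    if support - against >= threshold:
        return "rely"           # enough remaining support
    if against - support >= threshold:
        return "reject"         # remaining evidence deems it false
    return "seek-evidence"      # insufficient: escalate evidence-gathering

evidence = [
    {"supports": True,  "weight": 3, "discredited": True},   # proved wrong
    {"supports": True,  "weight": 1, "discredited": False},
    {"supports": False, "weight": 1, "discredited": False},
]
print(reevaluate(evidence))  # -> "seek-evidence", not "reject"
```

Note that removing the discredited evidence leaves the conclusion neither
relied upon nor marked false - exactly the middle state argued for above.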
> > It's one hour before the destruction of the human race.
> > I have twenty doors - nineteen take 50 minutes to
> > traverse and come back, one leads to the total
> > invulnerability of human beings in 25 minutes. Do I
> > want one runner or twenty?
> An FAI simply forks twenty processes to do the job. FAI != human.
I'm sorry, the total computational power available to the FAI was required
to successfully navigate the path behind each door in the time indicated.
Your FAI successfully navigated the first 12% of the path to human
invulnerability, but the human race died because you assumed an omnipotent
FAI. Maybe if you'd had a team of twenty FAIs (which takes less hardware
than a single AI twenty times the size)? FAI != omnipotent.
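The arithmetic of the doors scenario can be made explicit. This is just a
sketch of the argument as stated above (a wrong door costs a 50-minute round
trip, the right path takes 25 minutes, the deadline is 60 minutes):

```python
# Sketch of the twenty-doors argument: one runner versus twenty runners,
# one per door. Numbers are taken from the scenario as stated.
from fractions import Fraction

DOORS = 20
WRONG_ROUND_TRIP = 50   # minutes to traverse a wrong door and come back
RIGHT_TRAVERSE = 25     # minutes to the right door's payoff
DEADLINE = 60           # minutes until destruction

# One runner: a wrong first pick burns 50 minutes, leaving only 10 --
# not enough for the 25-minute correct path. Success requires guessing
# the right door on the first try.
p_one = Fraction(1, DOORS)

# Twenty runners, one per door: whoever drew the right door finishes in
# 25 minutes, well inside the deadline, so success is certain.
p_twenty = Fraction(1) if RIGHT_TRAVERSE <= DEADLINE else Fraction(0)

print(p_one, p_twenty)  # 1/20 versus 1
```

A 5% chance versus certainty: this is the sense in which twenty runners beat
one, whatever the per-runner intelligence.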
> > You lost me at "one AI cares only about paperclips,
> > while the other cares only about staples". They both
> > care about friendliness. [snip]
> Yep, you missed the argument. The AIs being described were unfriendly.
> Re-read it again without *any* assumptions other than that the AIs have
> goals as given: paperclips or staples.
No, I deliberately stopped because the argument started with a nonsensical
premise. Re-read my argument that starts with the correct premise for the
situations that we're discussing.
> AI theory is not like Physics equations. You can't write it down on a
> single sheet of letterhead. A good example might be Ben's own Novamente
> documentation - it is huge! Are you expecting FAI documentation to be any
> smaller?
Ben's Novamente documentation is huge but digestible given sufficient time.
You can prove things with it and argue against parts of it. Ben's
documentation allows others to work on his problem and correct his errors
(or, at least, debate them).
I would expect the FAI documentation to also be of substantial size.
Unfortunately, FAI documentation is either mostly non-existent or such a
closely held secret that very few people have seen it.
Which style of documentation better facilitates collaboration and offers a
better chance of a speedy success?
> It is knowably more difficult to design a set of independent social beings
> that tend as a group to constrain their growth and development along a
> Friendly trajectory, than to design a singleton to do the same.
Could you offer some proof of this or arguments that tend to support this?
> There is knowably more risk in failing to get a 6 when rolling a die
> multiple times, instead of one time - even when the die is loaded.
But that's the wrong argument. I counter with the fact that it is much less
likely to roll a majority of sixes across multiple dice than it is to roll a
six with a single die.
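The counter-claim is easy to check with a binomial calculation. A sketch,
assuming fair dice (the point survives for loaded dice so long as a six is
less likely than not):

```python
# Chance of a MAJORITY of sixes across n fair dice, versus a single six
# on one die. Illustrates the counter-argument above.
from fractions import Fraction
from math import comb

def p_majority_sixes(n, p=Fraction(1, 6)):
    """Probability that strictly more than half of n fair dice show a six."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

single = Fraction(1, 6)          # one die, one six: 1/6 ~ 0.167
majority3 = p_majority_sixes(3)  # 2 or 3 sixes of 3:  2/27 ~ 0.074
print(float(single), float(majority3))
```

With three dice, a majority of sixes (2/27) is less than half as likely as a
single six (1/6), and the gap only widens as more dice are added.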
> Redundancy is a good thing in certain situations. It would very often be
> true that a group of three humans could address a challenging problem
> together much more effectively than a single person, even if they each had
> a Pocket Armageddon (TM), unless one (or more) of them was an intelligent
> psychopath.
You don't give them all a Pocket Armageddon. You give them one Pocket
Armageddon with three keys. At the least, then, the intelligent psychopath
has to murder the other two without accidentally disabling their keys AND
then set off Armageddon before being caught (possibly by more AIs).
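The three-key arrangement is just an all-of-n threshold check. A minimal
sketch (all names hypothetical, and a real design would use cryptographic
key-splitting rather than flags):

```python
# Minimal sketch of the "one device, three keys" safeguard described above.
# A lone defector cannot trigger the device: every key must be turned AND
# intact, so murdering a keyholder or damaging a key blocks firing.

def can_fire(keys, required=3):
    """True only when at least `required` keys are turned and undamaged."""
    valid = [k for k in keys if k["turned"] and not k["disabled"]]
    return len(valid) >= required

keys = [
    {"holder": "AI-1", "turned": True,  "disabled": False},
    {"holder": "AI-2", "turned": False, "disabled": False},  # refuses to turn
    {"holder": "AI-3", "turned": True,  "disabled": True},   # key damaged
]
print(can_fire(keys))  # -> False: the defector is blocked
```

Lowering `required` below the number of keyholders would trade safety for
availability; the argument above deliberately keeps it at all three.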
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT