From: ps udoname (ps.udoname@gmail.com)
Date: Fri Sep 29 2006 - 04:44:05 MDT
Well, again, I think you're being overly pessimistic here. From what I
> know of quantum computing, it works really really well for a narrow
> set of problems (looking at Wikipedia's list of "factoring, discrete
> logarithm, and quantum physics simulations" seems about right). Now,
> if Penrose is right, the third problem set would be useful, but I just
> don't see a great boost to AI programs being given by the first two.
> In other words, to me it looks like AI is going to include great gobs
> of classical AI and computing.
I think (and this is where it gets really confusing) that since Penrose's
reasoning is that the Gödel incompleteness theorem means the brain must be
capable of non-algorithmic reasoning, the "factoring, discrete logarithm,
and quantum physics simulations", being algorithmic, are not sufficient,
and his idea is that this non-algorithmic thingy happens when the wave
function collapses.
So if he is right, the brain is a long way from classical AI.
I need to go read more on this. But the good thing about this is that unlike
a lot of philosophy about the mind, this should at least be experimentally
verifiable.
...It's humor, mon. You're supposed to laugh. Admittedly, it's fairly
> geeky and turn of the century humor, but still fairly normal.
>
> > Do they want to be attacked? If not, it's a good argument against
> > collective volition.
>
> Perhaps they really enjoy it. The point is that from our limited
> perspective it is hard to tell: I remember being a child and being
> disgusted by the mere suggestion of kissing.
I realise it's humor, but it's in the middle of something serious - and I'm
sure some of them would enjoy it, but I think the odds are against every
single Japanese schoolgirl wanting to be attacked.
Oh, and if the world does end up getting turned into a hentai anime, if it's
ok with everyone could I be a purple-haired lesbian with some kinda
superpowers? (and I am serious)
I'm not following how game theory would help here. If anything, I'd
> think that multiple uploads would be even worse - the pressures for
> preemptive strikes/defections would be overpowering: suppose one were
> an upload and decided to go slow since there is no need to rush to get
> in the capabilities to take out all the other uploads. Isn't it
> obvious that such a strategy would play right into the hands of the
> few bad apples, who could so rush and then obliterate the others? So
> even the Friendly ones would be forced to rush into transcendence or
> whatever just to thwart the possible unFriendly ones. And if two
> managed to achieve the same levels before they turned on each other,
> then the side effects could range anywhere from unnoticeable to an
> existential risk.
I was having a conversation elsewhere with someone who argued that uploads
would lead to an arms race which would eventually consume the entire
universe and lead to a vastly accelerated heat-death of the universe
(assuming that no way is found to reverse entropy).
This is likely if there were no strong, stable treaty/government to prevent
conflict and arms races. Multiple uploads would be necessary for this
stability: with two uploads all your arguments above are valid, whereas with
1000 uploads a preemptive strike may or may not be enough to defeat all 999
other uploads. As we don't know what form posthuman warfare would take, it's
hard to know how effective a preemptive strike would be, and thus how many
uploads are necessary for game theory to guarantee stability.
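A toy way to see why the count matters (my own sketch, not anyone's model of
posthuman warfare: I'm just assuming each rival is knocked out independently
with some probability p, which is obviously a gross simplification):

```python
def strike_success_prob(p: float, n: int) -> float:
    """Chance a first-striker defeats all n-1 other uploads, assuming
    each rival is eliminated independently with probability p."""
    return p ** (n - 1)

# Even a 99%-effective strike almost certainly fails against 999 rivals,
# while it's a coin-toss-or-better against one or a handful.
for n in (2, 10, 1000):
    print(n, strike_success_prob(0.99, n))
```

Under that assumption the payoff of striking first collapses as the number
of uploads grows, which is the intuition behind wanting many rather than two.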
I suppose the question is which is more practical: a stable government of
posthumans or FAI.
This largely revolves around which techs come first. I think uploads might
come before FAI, but with the computing power for uploads it should be
possible to brute-force UFAI. And someone probably will. DNI could come long
before any of the above.
Finally, does everyone here agree with collective volition as opposed to
individual volition?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT