Re: JOIN

From: maru dubshinki (marudubshinki@gmail.com)
Date: Wed Sep 20 2006 - 13:37:49 MDT


On 9/20/06, ps udoname <ps.udoname@gmail.com> wrote:
> True, I did not mean that quantum effects would make AI impossible, but
> what it would mean is that work on how to program AI on a classical computer
> might not translate to how to create an AI that uses these quantum effects.
> Secondly, creating the hardware for an AI would be harder if Penrose is
> right, as the microtubule model is far more complex than other theories of
> how neurones work. This means that AI and uploading come later and so DNI,
> nanotech etc. come first, which means we have to worry about grey goo etc.

Well, again, I think you're being overly pessimistic here. From what I
know of quantum computing, it works really, really well for a narrow
set of problems (Wikipedia's list of "factoring, discrete logarithm,
and quantum physics simulations" seems about right). Now, if Penrose
is right, the third problem set would be useful, but I just don't see
the first two giving any great boost to AI programs. In other words,
to me it looks like AI is going to include great gobs of classical AI
and computing.
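
To put a rough number on how narrow that win is, here's a quick
back-of-the-envelope Python sketch (my own illustration, not anything
from the list - it uses the standard heuristic cost formula for the
general number field sieve and the usual ~n^3 gate count for Shor's
algorithm, constants ignored) comparing classical and quantum
factoring of an n-bit number:

    import math

    def gnfs_cost(bits: int) -> float:
        """Heuristic cost of the general number field sieve for an
        n-bit number: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3))."""
        ln_n = bits * math.log(2)  # ln N for an n-bit number N
        return math.exp((64 / 9) ** (1 / 3)
                        * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))

    def shor_cost(bits: int) -> float:
        """Rough gate count for Shor's algorithm: O(n^3) in the bit
        length, ignoring constants and error correction."""
        return float(bits) ** 3

    for bits in (512, 1024, 2048):
        print(f"{bits:4d} bits: GNFS ~ 2^{math.log2(gnfs_cost(bits)):.0f}"
              f" ops, Shor ~ 2^{math.log2(shor_cost(bits)):.0f} gates")

For 1024 bits that works out to roughly 2^87 classical operations
against 2^30 quantum gates - a spectacular speedup, but spectacular
for *factoring*, which is not obviously a bottleneck for cognition.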

> It's good to see that thought is going into this. However, some things worry
> me (and I have looked at the PAQs, but I'm still not happy):
>
> "The worrying question is: What if only 20% of the planetary population is
> nice, or cares about niceness, or falls into the niceness attractor when
> their volition is extrapolated?"
>
> "If I later find I'm one of the 5% of humanity whose personal philosophies
> predictably work out to pure libertarianism, and I threw away my one chance
> to free humanity - the hell with it."
>
> Yep, assuming humanity as a whole is capable of making good choices is a
> bad idea I think, as shown by all the things democracies have done in the
> past which we find repugnant today. Taking account of volition helps this;
> hopefully a democracy based on volition would never have had slavery, for
> instance.
> However, given the number of Christians in the world, what if the world
> falls into a Christianity attractor for instance? (Just an example, I don't
> mean that Christianity is the ultimate evil)
>
> "The reason for the collective dynamic is that you can go collective ->
> individual, but not the other way 'round. If you could go individual ->
> collective but not collective -> individual, I'd advocate an individual
> dynamic."
>
> Why can it not go the other way round? If collective is fundamentally the
> right way, I would assume that if everyone kept growing mentally, eventually
> all individuals would elect to join the collective.
>
> "What about infants? What about brain-damage cases? What about people with
> Alzheimer's disease?"
>
> Wouldn't the long term extrapolated volition of an infant be the same as
> the long term extrapolated volition of an adult?
>
> "Maybe everyone wearing a Japanese schoolgirl uniform at the time of
> Singularity will be attacked by tentacle monsters."
>
> WTF?

...It's humor, mon. You're supposed to laugh. Admittedly, it's fairly
geeky, turn-of-the-century humor, but still fairly normal.

> Do they want to be attacked? If not, it's a good argument against
> collective volition.

Perhaps they really enjoy it. The point is that from our limited
perspective it is hard to tell: I remember being a child and being
disgusted by the mere suggestion of kissing.

> In general, I would far prefer Individual volition, because I trust my
> volition, but not the rest of humanity's.
>
> Also, am I correct in understanding that the volition of non-human
> sentients does not count?
.....
> > Long story short, not giving the AI direct control over anything would
> > probably only work in the early stages and so would be of minimal use.
> > Ditto for the big off button.
>
> I realise it might only work in the early stages, but that could still be
> useful.
>
> The links are worrying and very surprising, especially as a real AI could
> be far better at persuading people to let it out. Perhaps AI boxing is a
> bad idea in view of that.
> I would like to take part in an AI boxing experiment sometime.

Good luck. I think Eliezer is sick of doing them, so you'll have to
look for someone else.

> > Uploading brains doesn't seem to be a popular suggestion on this list
> > either, since their reasoning goes that humans are not the stablest
> > and sanest mentalities you can get, and would probably become
> > unfriendly in an upload.
>
> By 'bootstrapping' I didn't just mean uploading but also DNI and all forms
> of intelligence enhancement, some of which will come before AI. I think AI
> will come before uploading, as it must be easier to reverse engineer a
> generic brain than to upload a specific person. However, this would not
> necessarily mean FAI comes before uploading.
> I realise humans are likely to be fairly unfriendly. The best thing to do
> would be to bootstrap as many as possible simultaneously. They would not
> fight each other because of game theory (I hope). Then you would have to
> hope that at least one of the people bootstrapped would be friendly enough
> to upload everyone else.

I'm not following how game theory would help here. If anything, I'd
think that multiple uploads would be even worse - the pressures for
preemptive strikes/defections would be overpowering. Suppose one were
an upload and decided to go slow, since there is no need to rush to
gain the capability to take out all the other uploads. Isn't it
obvious that such a strategy would play right into the hands of the
few bad apples, who could rush and then obliterate the others? So
even the Friendly ones would be forced to rush into transcendence or
whatever just to thwart the possible unFriendly ones. And if two
managed to achieve the same levels before they turned on each other,
then the side effects could range anywhere from unnoticeable to an
existential risk.
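
To make that concrete: the situation is basically a one-shot
prisoner's dilemma. Here's a small Python sketch with made-up payoffs
(my own assumptions, chosen only so that striking first beats being
struck and mutual restraint beats mutual rushing) showing that "rush"
is the best reply no matter what the other upload does:

    # Two uploads each pick a strategy; payoffs are (row, column).
    # The numbers are illustrative assumptions, not from anywhere.
    PAYOFFS = {
        ("slow", "slow"): (3, 3),  # mutual restraint: safe coexistence
        ("slow", "rush"): (0, 4),  # the slow one gets obliterated
        ("rush", "slow"): (4, 0),  # first strike wins outright
        ("rush", "rush"): (1, 1),  # arms race: risky for everyone
    }
    STRATEGIES = ("slow", "rush")

    def best_response(opponent: str) -> str:
        """Row player's best reply to a fixed opponent strategy."""
        return max(STRATEGIES, key=lambda s: PAYOFFS[(s, opponent)][0])

    for opp in STRATEGIES:
        print(f"best reply to {opp!r}: {best_response(opp)!r}")

Both lines print 'rush': it's a dominant strategy, so (rush, rush) is
the only equilibrium even though both uploads would prefer (slow,
slow). That's exactly why I don't see game theory keeping multiple
uploads from fighting.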

> Sorry about the spelling on my previous post.

Jes' keep it fixed. Solutions are better than apologies, as the saying goes.

~maru


