Re: JOIN

From: ps udoname (ps.udoname@gmail.com)
Date: Wed Sep 20 2006 - 06:51:24 MDT


> Well, so what? If consciousness is purely impossible without quantum
> effects, and consciousness really is the simplest way of producing all
> the observable consequences we attribute to intelligence and
> consciousness, then we can simply make a quantum computer. Progress is
> steady and sure in that domain these days anyway.

True, I did not mean that quantum effects would make AI impossible, but what
it would mean is that work on how to program an AI on a classical computer
might not translate to how to create an AI that uses these quantum effects.
Secondly, creating the hardware for an AI would be harder if Penrose is
right, as microtubules are far more complex than the mechanisms in other
theories of how neurones work. This means that AI and uploading come later,
and so DNI, nanotech etc. come first, which means we have to worry about
grey goo etc.

> Well, that's act utilitarianism you're thinking of there, not rule or
> any of the more exotic variants. Eliezer's latest thinking is
> apparently on the lines of his
> http://www.intelligence.org/friendly/extrapolated-volition.html essay;
> it's worth reading.

It's good to see that thought is going into this. However, some things worry
me (and I have looked at the PAQs, but I'm still not happy).

"The worrying question is:
What if only 20% of the planetary population is nice, or cares about
niceness, or *falls into the niceness attractor* when their volition is
extrapolated?"

"If I later find I'm one of the 5% of humanity whose personal philosophies
predictably work out to pure libertarianism, and I threw away my one chance
to free humanity - the hell with it."

Yep, assuming humanity as a whole is capable of making good choices is a bad
idea, I think, as shown by all the things democracies have done in the past
which we find repugnant today. Taking account of volition helps with this;
hopefully a democracy based on volition would never have had slavery, for
instance.
However, given the number of Christians in the world, what if the world
falls into a Christianity attractor, for instance? (Just an example; I don't
mean that Christianity is the ultimate evil.)

"The reason for the collective dynamic is that you can go collective ->
individual, but not the other way 'round. If you could go individual ->
collective but not collective -> individual, I'd advocate an individual
dynamic."

Why can it not go the other way round? If the collective dynamic is
fundamentally the right one, I would assume that if everyone kept growing
mentally, eventually all individuals would elect to join the collective.

"What about infants? What about brain-damage cases? What about people with
Alzheimer's disease?"

Wouldn't the long-term extrapolated volition of an infant be the same as the
long-term extrapolated volition of an adult?

"Maybe everyone wearing a Japanese schoolgirl uniform at the time of
Singularity will be attacked by tentacle monsters."

WTF?

Do they want to be attacked? If not, it's a good argument against collective
volition.

In general, I would far prefer individual volition, because I trust my
volition, but not the rest of humanity's.

Also, am I correct in understanding that the volition of non-human sentients
does not count?

> > Instead, I think brain-computer interfacing might be a better idea. AI
> > attempts should not give the AI direct control over anything, and the AI
> > should be asked how to bootstrap humans. A big off button would also be a
> > good idea.
>
> Your second line is a suggestion for AI boxing. See
> http://sl4.org/archive/0207/4935.html and http://sl4.org/wiki/AI_Jail
> Long story short, not giving the AI direct control over anything would
> probably only work in the early stages and so would be of minimal use.
> Ditto for the big off button.

I realise it might only work in the early stages, but that could still be
useful.

The links are worrying and very surprising, especially as a real AI could be
far better at persuading people to let it out. Perhaps AI boxing is a bad
idea in view of that.
I would like to take part in an AI boxing experiment sometime.

> Uploading brains doesn't seem to be a popular suggestion on this list
> either, since their reasoning goes that humans are not the stablest
> and sanest mentalities you can get, and would probably become
> unfriendly in an upload.

By 'bootstrapping' I didn't just mean uploading but also DNI and all other
forms of intelligence enhancement, some of which will come before AI. I
think AI will come before uploading, as it must be easier to
reverse-engineer a generic brain than to upload a specific person. However,
this would not necessarily mean FAI comes before uploading.
I realise humans are likely to be fairly unfriendly. The best thing to do
would be to bootstrap as many as possible simultaneously. They would not
fight each other, because of game theory (I hope): against a peer of
comparable power, war is costly and its outcome uncertain, so mutual
cooperation should be a stable outcome. Then you would have to hope that at
least one of the people bootstrapped would be friendly enough to upload
everyone else.
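
To make that game-theory hope concrete, here is a toy sketch in Python (the
payoff numbers are mine, invented purely for illustration, not anything from
the essay): with comparable power, attacking a peer is costly and uncertain,
so mutual cooperation comes out as a stable equilibrium, though so does
mutual war, which is why it is only a hope.

# Toy 2x2 symmetric game, invented payoffs: two peer-level bootstrapped
# minds each choose Cooperate (0) or Attack (1).
COOPERATE, ATTACK = 0, 1

# payoff[my_move][their_move] = my payoff
payoff = [
    [3, 0],  # I cooperate: peace = 3, attacked while unprepared = 0
    [2, 1],  # I attack: costly, uncertain war on a peer = 2, mutual war = 1
]

def pure_nash_equilibria(payoff):
    """Return the pure-strategy Nash equilibria of a symmetric 2x2 game."""
    eqs = []
    for a in (COOPERATE, ATTACK):
        for b in (COOPERATE, ATTACK):
            # Each move must be a best response to the other player's move.
            a_ok = payoff[a][b] >= max(payoff[x][b] for x in (COOPERATE, ATTACK))
            b_ok = payoff[b][a] >= max(payoff[x][a] for x in (COOPERATE, ATTACK))
            if a_ok and b_ok:
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria(payoff))  # [(0, 0), (1, 1)]: peace is stable, but so is mutual war

So simultaneous bootstrapping doesn't guarantee peace; at best it makes
peace self-enforcing once established.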

Sorry about the spelling on my previous post.

> I could say more about myself, but I might as well do the questionnaire
> ......
>
>
> ~maru
>


