Re: [sl4] Re: Signaling after a singularity

From: Bryan Bishop (kanzure@gmail.com)
Date: Wed Jun 25 2008 - 08:18:44 MDT


On Wednesday 25 June 2008, Stuart Armstrong wrote:
> >> The aspect of a post-singularity world I'd like to look at is the
> >> absence of signalling. If we had an advanced AI, it should be able
> >> to fully understand the personality and abilities of an individual
> >> human. If it were willing to reveal this information to humans,
> >> then we would live in a society with perfect understanding of each
> >> other.
> >
> > I suppose you're taking the AI-only (Anissimovic) approach to the
> > singularity. I don't know what an understanding of personality
> > would entail. The idea of personality is pop psychology anyway, so
> > saying something like this makes me wonder if you know what the AI
> > would actually know in the first place.
>
> Personality may be pop psychology, but it's not a concept devoid of
> information. It's useful in assigning certain people to certain jobs,

It /is/ devoid of information. It's folk psych. You need to be
addressing the basis of the brain, and what variation the underlying
architecture does or does not allow, not working backwards from
surface behavior. Think of it this way: reverse engineering a
microprocessor with a few billion transistors and a 128-bit i/o bus by
black-box testing takes something like 2^128 input vectors, and that
only covers a single internal state; add multiple states, and
sequences of inputs in multiple combinations, and the number of
behaviors to map blows up to something on the order of (responses per
input)^(2^128). Pop psych is the same black-box game. Instead of doing
it that way, try looking at the schematics; in biology this is usually
done via mutational studies, since the mapping between DNA, protein,
and overall result is unclear.
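To put rough numbers on that, here's a minimal Python sketch; the
128-bit bus comes from the example above, everything else is
illustrative:

    # Back-of-the-envelope cost of black-box testing a chip.
    # Assumes a 128-bit input bus; figures are illustrative only.
    input_bits = 128
    vectors_for_one_state = 2 ** input_bits  # every input combination, for ONE internal state
    print(vectors_for_one_state)             # ~3.4e38 test vectors
    # Add internal state and input sequences and the number of behaviors
    # to map blows up again, roughly (responses per input)^(2**128).
    # Reading the schematic (or doing mutational studies, in biology)
    # sidesteps the whole enumeration.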

> for instance. Assuming the AI could not just brute-force the problem
> and predict everyone's actions in every circumstance (chaos would
> probably forbid this), then the AI would have to rely on some
> simplified model that gives it enough information to make decisions.

Make what decisions? I suspect you're going back to the idea of an AI
dictatorship? I don't understand. :-/ More on this below.

> "Personality" is one of our simplified model's ("abilities" is
> another); the AI's simplified models would be much better, but we can
> still call "Personality" as a shorthand.

Ok, I can give you that.

> > As for the economy, just ignore it. As for the culture, I don't
> > see what you mean. Would the information be deleted for some
> > reason?
>
> The old culture would still be there; it's whether there would be new
> ongoing cultures that I'm wondering.

Perhaps on another planet? Not that I'm saying it couldn't happen on
Earth as well.

> > Why would dictators be the best way to govern?
>
> I'm not saying they would; I'm just saying that all the assumptions
> about which system of government is best may have to be re-examined,
> and what we take for granted now (that democracy is best) may not
> hold after a singularity.

Looks like you're assuming a spectrum with democracy and dictatorship
at opposite ends of one line. That doesn't sound truly reconsidered.
And I don't see why you're putting an AI into those considerations at
all. I'm willing to discuss alternative forms of living and having
lots of people around, but again, it doesn't necessarily involve a
giant Holy AI at the head of the whole beehive. I don't know where
this idea comes from.

> As for why governing, there will still be a finite (though huge)
> amount of resources available to anyone, and there will still be the
> problem of violence/coercion between agents. Some system of

Maybe. But that violence can be solved. It's a singularity, not a pot
luck dinner. Instead of having one copy of yourself running around,
try having some redundancy and backups, so that if something malicious
does happen to you, you're not dead. It's just good practice; many
programmers already work this way: daily backups, repositories,
keeping track of diffs, etc. So calling for an AI in that case doesn't
seem like a good idea. Just make sure people are informed and give
them the tech. Since it's a singularity, I'm sure giving them a
kinematic self-replicating machine will not be a problem.
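By analogy, here's a minimal sketch of the programmer version of that
habit (the paths and names are made up, just to illustrate dated,
verifiable backups):

    # Keep a dated copy of some important state, plus a checksum to
    # verify it later. 'important_state.bin' and 'backups/' are
    # hypothetical names.
    import shutil, hashlib, datetime, pathlib

    def backup(src="important_state.bin", dest_dir="backups"):
        dest = pathlib.Path(dest_dir)
        dest.mkdir(exist_ok=True)
        stamp = datetime.date.today().isoformat()
        copy = dest / ("%s-%s" % (stamp, pathlib.Path(src).name))
        shutil.copy2(src, copy)           # the dated backup copy
        digest = hashlib.sha256(copy.read_bytes()).hexdigest()
        return copy, digest               # digest lets you verify the copy later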

> government would still be needed to adjudicate conflicts. And this

Eh?

> system must have access to a higher level of violence than any
> individual agent, if there is any chance that agent could misbehave.

My stick is bigger than yours? That's the best we can come up with?

> It might be a collaborative, communal hippy government, or it might
> be an AI dictatorship, but it would still be a government.

No, it could be no government at all. Have you considered that these
technologies are liberating? That they don't force us to rely on
governments? That they let us live our lives without the constraints
that governments used to exist to help us cope with? Etc.

- Bryan
________________________________________
http://heybryan.org/


