Fw: Re: Society Saveable? was (Different View of IA and Transparent Society)

From: Will Pearson (w.pearson@mail.com)
Date: Fri Apr 26 2002 - 10:13:45 MDT


> > ftp://ftp.vub.ac.be/pub/projects/Principia_Cybernetica/WF-
> > issue/Social_MST.txt . Based on meta-system transition theory.
>
> The server is refusing my connection. :-/
 
Those are not actually my views; I shouldn't have given that impression, but I do agree with them. Try going through this page, http://pespmc1.vub.ac.be/SUPORGLI.html : there is a link with a red arrow next to it, labelled "Heylighen & Campbell, 1995".
 
> If you haven't, you may want to read some books about evolutionary
> psychology. I had a pretty optimistic view of humans until I learned
> about how our brains work and how they got that way. I'll admit,
> evolutionary psychology is mostly theory, but it's the only existing
> theory that makes much sense. You can find some references to a few
> good texts on the subject on my Singularitarian reading list:
 

I'll check some out. I have read an introductory book on the subject, so I shall have to read more. A while ago I also read some criticisms of the field: its practitioners can find explanations for lots of different things by supposing that a feature increases the fitness of an individual, while having no evidence for that increase in fitness. Take male abuse of pre-reproductive children: you can posit any number of theories about why this happens, or it could simply be a side effect of the general urge to mate combined with aggressiveness. I shall have to read more about it before I make up my own mind.

 
> I think many of us on this list are for rewiring human brains. I'm all
> for it. Of course, most of us want to do it post Singularity once we're
> uploads. While we would certainly benefit from the right kind of
> rewiring pre Singularity, there are logistical issues like changing
> everyone and making sure that the rewired brains are robust enough to
> keep from letting themselves be taken over.
 
A possibility I see is a small group of individuals getting together and rewiring their brains, becoming a collective. They would act together as a colony. Some people would try to manipulate them, and might succeed; then another small group would try with those gaps plugged. If they succeed, they will be more successful in life (as they have more trustworthy knowledge), and people will want to join them because people are selfish :). If not, they will gradually increase in number anyway because of evolutionary pressure (being fitter). And there may be more than one such collective in the end, possibly through fission.
 
Maybe I should try to build a coherent picture of all these ideas, rather than presenting them piecemeal. As if I don't have enough to do...
 
I don't see this sort of scenario happening soon, and it won't be pretty when it does happen, so if your version happens more quickly then fair enough. I have issues with Friendliness (more precisely, with modifiable logical supergoals using probabilistic methods, and with the logic of letting a human build a Friendly AI) and hence with the incorruptibility of an AI, which I will try to shape into a coherent argument.

I also think that the main AI ideas on this list (Webmind and seed AI) ignore the fact that digital evolution can be Lamarckian, because it is not bound by the central dogma of embryology: that it is impossible to deduce the ideal genotype from the desired phenotype (due to vast non-linear mappings) without testing lots of different phenotypes. This means it may not take millennia for a mind to evolve. My idea of the mind is a vastly cut-down version of Psynet, starting with very simple creatures. A child goes from very simple logic to complex logic; is it not possible for a computer mind to do the same?
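To make the Lamarckian point concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, not Webmind or seed-AI code; the genome encoding, the TARGET, and the hill-climbing "lifetime learning" are all invented assumptions. The point is the one step biology forbids but software allows: improvements acquired during an individual's lifetime are written straight back into the heritable genome.

    import random

    # Toy Lamarckian evolution: a genome is a list of floats, and the
    # "phenotype" is the same list after lifetime learning (hill-climbing).
    # Because this is software, learned improvements are inherited directly,
    # so the offspring need not rediscover them by blind mutation.

    TARGET = [0.1, 0.9, 0.5, 0.3]   # arbitrary "ideal phenotype" for the toy

    def fitness(genome):
        # Higher is better: negative squared distance from the target.
        return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

    def learn(genome, steps=20, step_size=0.05):
        # Lifetime learning: simple hill-climbing on the phenotype.
        current = list(genome)
        for _ in range(steps):
            candidate = [g + random.uniform(-step_size, step_size)
                         for g in current]
            if fitness(candidate) > fitness(current):
                current = candidate
        return current

    def evolve(pop_size=20, generations=30):
        population = [[random.uniform(0, 1) for _ in range(len(TARGET))]
                      for _ in range(pop_size)]
        for gen in range(generations):
            # Each individual learns during its lifetime...
            learned = [learn(g) for g in population]
            # ...and, Lamarckian-style, the learned traits ARE the
            # heritable genome passed to the next generation.
            learned.sort(key=fitness, reverse=True)
            parents = learned[:pop_size // 2]
            population = [
                [g + random.gauss(0, 0.01) for g in random.choice(parents)]
                for _ in range(pop_size)
            ]
            print(f"gen {gen:2d}  best fitness {fitness(learned[0]):.5f}")

    if __name__ == "__main__":
        evolve()

Delete the line that feeds the learned phenotypes back in as genomes and you are back to slow Darwinian search; that one line is the whole difference.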

Anyway, think of replies to this, and I shall present all my views, along with some reasons why shortcutting mental evolution might not be a good idea, and we can have a good old argument some time. (A good reason for me to work on it is to get a feel for self-modifying code without having to be part of a team.) A brief introduction to my system: imagine Tierra, but instead of a fixed, externally imposed mutation, there is an internally defined mutation applied to the offspring. This mutation can act on the chunk of code that defines mutation itself, making it better or worse, possibly improving the mutation and also how the mutation mutates, and so becoming self-improving. All these programs can then act together or not, depending on the problem set them; if they do, they can be seen as a mind. There is lots of other stuff in my theory as well.
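Here is a minimal Python sketch of that mutation-mutates-itself idea. It is my own toy, not Tierra: real Tierra organisms are machine-code programs, whereas here the heritable "mutator chunk" is just a vector of per-locus mutation rates plus a meta-rate, all assumptions made for illustration.

    import random

    # Each organism's genome has two chunks: a "body" evaluated for fitness,
    # and a "mutator" chunk encoding its own per-locus mutation rates. On
    # reproduction the mutator is applied to the WHOLE genome, including the
    # mutator chunk itself, so how the organism mutates can itself evolve.

    BODY_LEN = 16

    def make_organism():
        return {
            "body": [random.randint(0, 1) for _ in range(BODY_LEN)],
            "rates": [0.05] * BODY_LEN,   # per-locus mutation probabilities
            "meta_rate": 0.02,            # how fast the rates themselves drift
        }

    def fitness(org):
        # Arbitrary toy task: maximise the number of 1-bits in the body.
        return sum(org["body"])

    def reproduce(parent):
        child = {
            "body": list(parent["body"]),
            "rates": list(parent["rates"]),
            "meta_rate": parent["meta_rate"],
        }
        # The inherited mutator acts on the body...
        for i in range(BODY_LEN):
            if random.random() < child["rates"][i]:
                child["body"][i] ^= 1
        # ...and on itself: the rates and the meta-rate drift by the meta-rate.
        child["rates"] = [
            min(0.5, max(0.001, r + random.gauss(0, child["meta_rate"])))
            for r in child["rates"]
        ]
        child["meta_rate"] = min(0.2, max(0.0001,
            child["meta_rate"] + random.gauss(0, child["meta_rate"])))
        return child

    def run(pop_size=50, generations=40):
        population = [make_organism() for _ in range(pop_size)]
        for gen in range(generations):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            population = survivors + [reproduce(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
            best = population[0]
            print(f"gen {gen:2d}  best {fitness(best)}/{BODY_LEN}  "
                  f"mean rate {sum(best['rates']) / BODY_LEN:.3f}")

    if __name__ == "__main__":
        run()

Because the meta-rate is inherited and mutable, lineages whose mutation machinery suits the problem out-reproduce the rest; that is the "improving how the mutation mutates" loop, in miniature.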

A quick question to get you thinking: is memetic (or actor/agent, if you want to call them that) evolution in human minds entirely logical? If not, do your theories of mind cover humans?

  Will


