From: Matt Mahoney (email@example.com)
Date: Sat Jul 19 2008 - 12:11:50 MDT
--- On Sat, 7/19/08, CyTG <firstname.lastname@example.org> wrote:
> - Imagine merge. Okay, this is how I think about it. With a lack of
> better understanding, and in the absence of a true AGI, I imagine it
> to be something like this:
> 1. Fred Hoyle's The Black Cloud.
> It's reasonable to assume a future intelligence will be massively
> parallel, and that entire cognitive domains are processed
> independently from the whole.
> 2. Splicing of neural nets.
> You may or may not have experience with today's practical AI, but
> let's take neural nets as an example: basically we're talking about
> classification and function approximation. It's plausible that you
> have two separate domains and two separately trained nets, and now
> you find that you'd like to splice these together so the resulting
> net is the sum of A and B (this is what I call neural algebra(tm),
> which could also be a big help with future neural implants for
> cognitive enhancement).
> So that is how I imagine merge: a subset of logic circuits simply
> enriched with a new knowledge domain.
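The splicing idea above can be sketched very simply. This is a minimal illustration of my own (the names W_a, W_b, and splice are hypothetical, not from any library): two single-layer nets trained on disjoint input domains are combined into one net with a block-diagonal weight matrix, so the merged net answers queries from either domain.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for two separately trained single-layer nets:
W_a = rng.normal(size=(3, 4))   # net A: 4 inputs -> 3 outputs
W_b = rng.normal(size=(2, 5))   # net B: 5 inputs -> 2 outputs

def splice(W1, W2):
    """Splice two nets into one whose input and output spaces are the
    disjoint union of the originals (a block-diagonal weight matrix)."""
    rows = W1.shape[0] + W2.shape[0]
    cols = W1.shape[1] + W2.shape[1]
    out = np.zeros((rows, cols))
    out[:W1.shape[0], :W1.shape[1]] = W1
    out[W1.shape[0]:, W1.shape[1]:] = W2
    return out

W_ab = splice(W_a, W_b)
x = rng.normal(size=9)          # concatenated input from both domains
y = W_ab @ x                    # first 3 outputs come from A, last 2 from B
```

Of course, a block-diagonal splice only makes the two knowledge domains co-resident; any "algebra" that lets the domains interact would need cross-block weights as well.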
I imagine merging in the sense of the competitive message routing (CMR) protocol I described in http://www.mattmahoney.net/agi.html
This is a form of trading resources (storage and bandwidth) in which agents do not initially trust each other and must establish reputation (e.g. via tit-for-tat). To the user, the systems appear to merge because the knowledge of both becomes available, and overall the system is more efficient (because it allows agents to specialize).
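The tit-for-tat reputation mechanism can be sketched in a few lines. This is a generic illustration of the strategy, not an implementation of CMR itself, and the function names are mine:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first exchange, then mirror the other
    agent's previous move ('C' = cooperate, 'D' = defect)."""
    return 'C' if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=5):
    """Run repeated exchanges; each agent sees the other's history."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b

always_defect = lambda history: 'D'
```

Two tit-for-tat agents settle into mutual cooperation (sustained trading), while an agent that always defects gets cooperation from a tit-for-tat partner only once before being cut off.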
> - But it sounds like someone has to agree to die! For the greater
> good, of course, and for future generations. You're assuming that a
> prime directive of an AI is to win this race <I expect cooperation
> to happen because cooperating groups will have a selective advantage
> over non-cooperators>, is that it?
> Why would an AI merge? Is it better to be assimilated than to be
> wiped out? Left behind? Forgotten?
You assume self preservation is part of the agent's utility function. CMR has no such goal, at least as long as individual agents have subhuman intelligence and are administered by humans. (Collectively, the network has superhuman intelligence). Just as your cells undergo apoptosis for the good of the whole, group selection favors individuals who are willing to sacrifice themselves for the good of the group. We don't want to die because individual selection is faster than group selection. This unfortunate fact also makes animals susceptible to cancer, society susceptible to crime, and CMR susceptible to polymorphic worms.
-- Matt Mahoney, email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT