Re: Sysop hacking

From: polysync@pobox.com
Date: Fri Feb 15 2002 - 08:49:03 MST


 In retrospect, I could have replaced most of my last posting with
"intelligence will always be fallible."

>> This is based on my assumption that at any one moment there is an upper
>> limit on the number of resources that can be constructively committed to a
>> single mind.)
> What is a mind?

 I don't know. How about: you can distinguish multiple minds from a single
mind because multiple minds can arrive at different conclusions regarding the
topics I posted earlier, and because different minds know different things. A
single mind will have a single conclusion or no conclusion, but not several
disagreeing conclusions.

> ...a Sysop composed of differentiated and specialized [X], each self aware
> and continuously engaging in intense information exchange with other
> Sysop-ian [X]

 I replaced a word with X, an unknown variable. If you set X to "people posting
to a mailing list", then you might be describing SL4. Other values would
describe the US federal government, a corporate structure, or the pieces of
my brain working together. The Sysop I see in (read into?) the FAQ is one of
the more tightly coupled examples: it reaches single conclusions on subjects
and acts uniformly on those conclusions. That would not be a problem if it had
peers that were allowed to arrive at other conclusions.

> In a googlebyte-size Sysop, I find it hard to believe that ve would only have
> one mind... so, a more likely outcome, in my opinion, would be a Sysop
> composed of differentiated and specialized subprograms, each self aware and
> continuously engaging in intense information exchange with other Sysop-ian
> subprograms and perhaps even a main brain.

 I think a more likely scenario is autonomous agents working locally and
independently to achieve some global effect, like an ambulance and a fire truck
on every corner. This setup would not be able to solve some global problems
that span localities. It does soothe some of my monkey-brain fears, though.
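
 A rough sketch of the kind of setup I mean; the agents, localities, and
incidents below are placeholders I made up, not anything from the FAQ:

class LocalAgent:
    """A purely local responder: it only sees and acts within its own locality."""

    def __init__(self, locality):
        self.locality = locality

    def can_handle(self, incident):
        # An agent handles an incident only if it is confined to its locality.
        return incident["localities"] == {self.locality}

    def respond(self, incident):
        return f"{self.locality} agent handles {incident['name']}"


agents = [LocalAgent(loc) for loc in ("north", "south", "east", "west")]

incidents = [
    {"name": "house fire", "localities": {"north"}},
    {"name": "injury", "localities": {"east"}},
    # A problem that spans localities: no single local agent can handle it,
    # and there is no coordinator in this sketch to stitch their views together.
    {"name": "approaching asteroid", "localities": {"north", "south", "east", "west"}},
]

for incident in incidents:
    responders = [a for a in agents if a.can_handle(incident)]
    if responders:
        print(responders[0].respond(incident))
    else:
        print(f"no local agent can handle {incident['name']} alone")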

> And if we don't think SI's will get ontotechnology, then there must be a mass
> limit to how dense/large a computing/emulation centre is.

 I'm thinking about knowledge propagation across a distance, time to
convergence, and the inability to know everything (that's going on), since you
can't be everywhere. I don't see how ontotechnology would change the fact that
your mind and mine are located in different places.
 Maybe there won't be any size limits if it turns out that all the events a mind
has to respond to unfold significantly more slowly than it can think and
propagate knowledge across itself.
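
 To make that comparison concrete, here is a toy calculation; the mind sizes
and event timescales are arbitrary numbers I picked to illustrate, not claims
about any particular design:

# Back-of-the-envelope numbers (all made up) for the size-limit question:
# a mind spread over some distance needs at least one light-crossing for
# knowledge to propagate, so it can only act as a single mind on events
# that unfold more slowly than that crossing time.

C = 299_792_458   # speed of light, m/s
AU = 1.496e11     # Earth-Sun distance, m

def crossing_time_seconds(diameter_m):
    # Lower bound on knowledge propagation: one light-crossing of the structure.
    return diameter_m / C

cases = [
    ("planet-sized mind, millisecond event", 1.3e7, 1e-3),
    ("planet-sized mind, hour-long event",   1.3e7, 3600.0),
    ("AU-sized mind, minute-long event",     AU,    60.0),
    ("AU-sized mind, hour-long event",       AU,    3600.0),
]

for label, diameter, event_timescale in cases:
    delay = crossing_time_seconds(diameter)
    verdict = "can converge in time" if delay < event_timescale else "too slow to act as one mind"
    print(f"{label}: crossing {delay:.3f}s vs event {event_timescale:g}s -> {verdict}")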


