From: Kevin Osborne (kevin.osborne@gmail.com)
Date: Thu Feb 02 2006 - 08:52:34 MST
*...copying/pasting in a palsied effort to be OT
[snip]...examples of what I might need to do in the course of developing
'workhorse' AGI code:
- write code that will parse the pixels of a scanned exponential
graph, generate the data model it represents, and call various math
functions against the result (a curve-fitting sketch follows this
list)
- fire off a gazillion threads on multiple servers across the globe
that share resources and scheduling to achieve a given computation
result, where kicking off a job can instantaneously load and run new
code segments without requiring user tweaking of the distributed
clients (e.g. stopping the target application server on the
distributed client, updating the class library and restarting - I'd
consider that a pretty severe disadvantage; see the class-loading
sketch after this list)
- parse the source code and OS/chip-specific binaries of all known
languages to derive a functional model that can be checked against
mathematical proofs and then regenerated in any other language
- text transliteration, conversion and parsing supporting Unicode
languages and all known file formats
- web/net spidering to grok every online resource possible via every
known protocol/stream: TCP/IP, P2P, X.25, SOAP, SNMP, codecs,
distillers, scrapers, etc. (a minimal spider sketch follows this list)
- mature persistent storage support, incl. database connection pooling
and driver/API support for the big RDBMSs, LDAP, BDB, etc. (a toy pool
sketch follows this list)
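To make the graph-parsing item concrete, here's a rough Java sketch of
one way it could work (every name and threshold here is my own
assumption, not an existing library): treat the darkest pixel in each
column of the scan as a curve sample, then fit y = a*e^(b*x) by linear
least squares on (x, ln y). Calibrating pixel coordinates against the
graph's real axes is deliberately left out.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.imageio.ImageIO;

    public class GraphFit {
        public static void main(String[] args) throws Exception {
            BufferedImage img = ImageIO.read(new File(args[0]));
            List<double[]> pts = new ArrayList<double[]>();
            // For each pixel column, take the darkest pixel as the
            // curve's height, flipping y so the origin is bottom-left.
            for (int x = 0; x < img.getWidth(); x++) {
                int bestY = -1, bestLum = 256;
                for (int y = 0; y < img.getHeight(); y++) {
                    int rgb = img.getRGB(x, y);
                    int lum = ((rgb >> 16 & 0xff) + (rgb >> 8 & 0xff)
                            + (rgb & 0xff)) / 3;
                    if (lum < bestLum) { bestLum = lum; bestY = y; }
                }
                if (bestLum < 64)   // only accept clearly dark pixels
                    pts.add(new double[] { x, img.getHeight() - bestY });
            }
            // Linearize ln y = ln a + b*x, then ordinary least squares.
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            int n = 0;
            for (double[] p : pts) {
                if (p[1] <= 0) continue;        // ln undefined at zero
                double lx = p[0], ly = Math.log(p[1]);
                sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly; n++;
            }
            double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double a = Math.exp((sy - b * sx) / n);
            System.out.println("fitted y = " + a + " * e^(" + b + " * x)");
        }
    }

The (a, b) pair the fit spits out is exactly the 'data model' that the
math functions would then be called against.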
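For the no-restart requirement, stock Java already gets you most of the
way: give each incoming work unit its own class loader, and the next
dispatch picks up the new version without stopping the VM. A minimal
sketch - the jar path and class name are invented for the example:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class HotLoader {
        public static void main(String[] args) throws Exception {
            // Assumed location of a freshly pushed work-unit jar.
            URL jar = new URL("file:/tmp/workunit.jar");
            // A fresh loader per work unit, so new versions never
            // collide with classes already loaded in this VM.
            URLClassLoader loader = new URLClassLoader(new URL[] { jar });
            // Assumed entry point: a class implementing Runnable.
            Class<?> task = loader.loadClass("agi.WorkUnit");
            Runnable unit = (Runnable) task.newInstance();
            new Thread(unit).start();   // no client restart required
        }
    }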
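And the spidering item at its absolute smallest - fetch one page over
HTTP and pull out the href targets. Everything else on that bullet (the
other protocols, queueing, deduping, robots.txt) is the actual hard
part and isn't shown:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class MiniSpider {
        public static void main(String[] args) throws Exception {
            URL seed = new URL(args[0]);
            StringBuilder page = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(seed.openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; )
                page.append(line);
            in.close();
            // Naive link extraction; fine for a sketch, not real HTML.
            Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(page);
            while (m.find())
                System.out.println(m.group(1));  // candidate next hops
        }
    }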
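The pooling bullet is less exotic than it sounds: at its core a pool is
just a bounded queue of pre-opened connections (the real ones - DBCP,
c3p0 - add validation, timeouts and resizing on top). A toy version,
with placeholder credentials:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class TinyPool {
        private final BlockingQueue<Connection> idle;

        // url/user/pass are whatever your RDBMS driver expects.
        public TinyPool(String url, String user, String pass, int size)
                throws Exception {
            idle = new ArrayBlockingQueue<Connection>(size);
            for (int i = 0; i < size; i++)
                idle.add(DriverManager.getConnection(url, user, pass));
        }

        public Connection borrow() throws InterruptedException {
            return idle.take();   // blocks when the pool is exhausted
        }

        public void giveBack(Connection c) {
            idle.offer(c);        // return the connection for reuse
        }
    }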
What this comes back to is that I want to leverage the
(give-me-a-better-word-for) infosphere. I want my workhorse code to provide
interfaces for sensors/input streams that will grok and manipulate the
greater planetwide digisphere/infosphere/
insert-summary-word-for-all-the-bit-stacks-everywhere-and-the-things-you-can-do-with-them.
I believe that it will be absolutely necessary to have the 'brain' code. But
I don't think sitting down and coding up a bunch of sexy self-replicating
algorithms in your academically superior but API-library-deficient
programming language is the way to go about it; at least not initially, not
for me.
If we can get code that does all of my wishlist above and more, then we have
filled in a pretty important gap: _input_. i.e.
+-------+ +------------+ +--------+
| input |-->| processing |-->| output |
+-------+ +------------+ +--------+
which in our case means:
+--------------------+ +-----+ +---------------------------------+
| everything digital |-->| AGI |-->| stuff that a human would output |
+--------------------+ +-----+ +---------------------------------+
I think the right way to go about this is to do the input stage first, or at
least in parallel, because if we can simulate the external sense-world of
our future AGI, then I think we have an easier path to maybe simulating
_him_.
In this way we can provide a framework in which the psynet guys can drop in
their algorithms, pattern builders and
other-stuff-im-way-too-n00b-to-comprehend and start seeing some output. A
bare-bones sketch of what that drop-in point might look like follows.
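By way of illustration only (every name here is mine, not psynet's
actual API): sensors wrap the digisphere, and the 'thinking' code never
learns whether its bits came from a spider, a scraper, or a database.

    public class SenseWorld {
        public interface Sensor {
            String name();      // e.g. "web", "rdbms", "ocr"
            byte[] sense();     // next observation, or null when dry
        }
        public interface Mind {
            // The drop-in point for the algorithm/pattern-builder folks.
            void perceive(String sensorName, byte[] observation);
        }
        public static void run(Sensor[] sensors, Mind mind) {
            // Dumb round-robin polling; a real framework would
            // schedule, throttle and prioritize.
            for (;;) {
                for (Sensor s : sensors) {
                    byte[] obs = s.sense();
                    if (obs != null) mind.perceive(s.name(), obs);
                }
            }
        }
    }

The point of the split is that the Mind side can be swapped or tweaked
endlessly while the sensor side keeps feeding it the same world.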
We can reduce the lossiness of our inputs over time until we can be sure
that we're providing the non-workhorse 'thinking' code with everything it
needs to simulate the best observations we as humans can make.
I think we can then tweak the little bugger's brain until he starts to think
like we want him to; and then maybe we'll be getting somewhere.