From: Charles D Hixson (firstname.lastname@example.org)
Date: Wed Jun 07 2006 - 16:41:21 MDT
>> ===== Original Message From Charles D Hixson <email@example.com>
>> pdugan wrote:
>>> I think the release of a serious simulation or commercial game of adequate
>>> craft and marketing could cause 2 or 3 to happen, respectively. What better
>>> way to get people to grok the singularity than have them interact with a
>>> computer that simulates and/or represents the core ideas?
>>> Patrick Dugan
>>>> ===== Original Message From Joshua Fox <firstname.lastname@example.org> =====
>>>> Anyone want to venture a guess on the public awareness of the
>>>> Singularity in, say, 2010 or 2015?
>>>> I'm wondering if the Singularity will
>>>> (1) remain the province of a few hundred or a few thousand
>>>> super-technologically-aware types (like, e.g., hints of possibilities
>>>> for faster-than-light spaceflight today);
>>>> (2) or spread into the awareness of hundreds of thousands or millions of
>>>> educated people (like, e.g. private space travel or nanotech today);
>>>> (3) or become a major social issue followed by tens or hundreds of
>>>> millions (like, e.g., genetically engineered food or nuclear power today).
>>>> Any thoughts?
>> One way to acquire sufficient computational resources would be to
>> publish an on-line game (or community? 2nd Life?) in which much of the
>> computation was done by what is essentially voluntary participants in a
>> bot-net. This would be even better if the game could be so structured
>> that portions of the game score depended on solving challenges that were
>> those needed by the program's operation. (Since these would be "high
>> level" challenges, they should have high rewards attached to them.
>> There would, after all, be a large probability of failure.)
>> Is this feasible? Is 2nd Life involved in such a scenario? (I
>> tend to think not.) But do note that a lot of the time in these games
>> the computer is essentially waiting for the user to react. This time
>> represents available CPU cycles that don't need to be merely discarded.
>> (Of course, with a lot of designs the OS already uses them for
>> background processing.)
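To make the idle-cycle idea concrete, here is a minimal sketch (in Python, with purely illustrative names) of a game loop that spends leftover frame time on donated work units rather than discarding those cycles:

```python
import time
from queue import Queue, Empty

# Hypothetical work units handed out by a central server; the squaring
# below is just a stand-in for the real "high level" computation.
work_queue = Queue()
results = []

def submit_work():
    for n in range(5):
        work_queue.put(n)

def crunch_one():
    """Run one pending work unit; return False if the queue is empty."""
    try:
        n = work_queue.get_nowait()
    except Empty:
        return False
    results.append(n * n)       # placeholder for the real computation
    return True

def game_loop(frame_budget=0.02, frames=10):
    """Each frame: render, poll input, then donate the leftover time."""
    for _ in range(frames):
        frame_start = time.monotonic()
        # ... render frame, poll player input (omitted) ...
        # Use the cycles that would otherwise merely be discarded:
        while time.monotonic() - frame_start < frame_budget:
            if not crunch_one():
                break           # no pending work; yield the rest of the frame

submit_work()
game_loop()
```

The key design point is that donated computation only runs inside the frame budget, so it never makes the game feel slower to the (fully informed) player.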
> Actually I was thinking of something more along the lines of a "massively
> single-player" game where users' different reactions to a partially simulated,
> but mostly representational hard take-off would yield different post-singular
> futures, so that the end-game is, to an extent, user-created content. Then
> people could put this stuff up, compare notes, and singularity awareness
> would generally improve. The upcoming Spore might have the same
> effect on people's awareness of xenobiology concepts.
> Second Life isn't really a challenge oriented user environment; the only real
> goals are aesthetic, with the result being a heterogeneous environment of user
> created content. Since a desirable post-singular future could probably be
> described as a heterogeneous environment of user created content, it makes
> sense to see a soft take-off being facilitated and "padded" by successive
> generations of such platforms. I don't know if designing a system that used
> human play to teach an AGI (if that's what you're getting at) is desirable or
> feasible. It's not feasible because if the virtual world doesn't have
> commercial entertainment appeal, you'll never get funding for it short of
> DARPA taking a very serious and creative stance on AGI, and it's not
> necessarily desirable because AGI pattern recognition isn't going to be mature
> enough to handle such an environment in early stages.
Well, that wasn't what I meant...but it's an interesting model for a
more advanced version. Create the entity as a character in the game or
community, and have it interact with the community to practice
interacting with people. This clearly depends on an environment where
there are lots of people acting rather "normally".
But what I was actually thinking of was using the game in the same way
that "bot-networks" use the computers that they zombify. The
differences would be that you would only do this while they were
playing/participating, and that you would be doing this with the full
knowledge of the players. I don't see this as requiring that the "game"
be challenge oriented. A community seems, if anything, a better
environment. It might even facilitate the learning of the meanings of
words: the entity would "hear" the words being used in a context that
it would be fully informed about. There are even precedents: NPCs have
lowered expectations, so that if the NPC acted in ways that
weren't too intelligent, this would be accepted, and as it learned
to act more intelligently and more helpfully, THIS would be accepted too.
An NPC "passing" for a player would be a kind of milestone. (But don't
take that too seriously; remember that Eliza once fooled a professor
into mistaking it for a human...resulting in a very angry professor.)
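The word-learning idea can be sketched very crudely. Assuming a hypothetical bot that "hears" each utterance paired with the fully-observed game event it accompanies, simple co-occurrence counting already yields rough word meanings (all names here are illustrative):

```python
from collections import Counter, defaultdict

# Toy sketch: associate each word with the game events it co-occurs with,
# exploiting the fact that the entity is fully informed about context.
class ContextLearner:
    def __init__(self):
        self.assoc = defaultdict(Counter)

    def observe(self, utterance, event):
        """Record that these words were heard while this event happened."""
        for word in utterance.lower().split():
            self.assoc[word][event] += 1

    def best_guess(self, word):
        """Return the event most often heard alongside this word, if any."""
        counts = self.assoc.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

bot = ContextLearner()
bot.observe("heal me please", "cast_heal")
bot.observe("heal the tank", "cast_heal")
bot.observe("attack the orc", "swing_sword")
```

This is nowhere near real language understanding, of course, but it illustrates why a community full of people acting "normally" is such useful training data: the context does most of the disambiguation.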
> Using an infra-human AGI to both direct and learn from such an environment as
> part of its sensory modality isn't a bad idea though, something to consider
> maybe five years down the line.
> I'm not sure if Ben's optimism regarding parallel hardware is accurate or
> not, but I don't think human support can make up for raw computation, though
> human support can aid pattern learning significantly.
Parallel hardware is very important, but you pay a high price. Syncing
up costs a lot, so you need to be very careful how you divide things.
If you are doing neural networks, I presume that there is some more
obvious approach, but for me ... justifying a forked process is a tough
thing to do on my current system. Forking is relatively expensive, and
I only have a couple of processors anyway. But if you choose your fork
points properly you don't lose much. Low level parallelism, however,
becomes very inefficient on standard processors. I suspect that any
approach that depends on it is going to need to structure itself
hierarchically, though each "level" of nodes will need lots of sideways
communication within its local group (ganglion? column?). Then you
need error checks, etc. Predictions help here. If you don't see what
you predicted you would see, then you need to check for errors. If you
do, you can probably skip that step. (This is sometimes a mistake...but
people seem to work that way, so it's probably a reasonable cost to pay.)
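A toy sketch of that prediction-gated checking, using a deliberately naive "expect persistence" predictor (all names illustrative):

```python
# Only run the expensive error check when the observation surprises us;
# when it matches the prediction, skip the check and save the work.
def run(readings, tolerance=0.5):
    expensive_checks = 0
    predicted = readings[0]
    for value in readings[1:]:
        if abs(value - predicted) > tolerance:
            # Surprise: this is where the costly verification would go.
            expensive_checks += 1
        # (Occasionally skipping a real error is the price of the shortcut.)
        predicted = value   # naive predictor: expect the value to persist
    return expensive_checks
```

On a smooth stream of readings this checks almost nothing; effort is concentrated exactly where the prediction fails, which is the point of the scheme.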
Note that one of the main functions of the clustering is to limit the
amount of interprocessor communication required in a parallel system.
If parallel systems weren't VERY important, there's no way that anybody
would pay for this kind of overhead.
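A toy illustration of why the clustering pays off: the same nearest-neighbor message pattern, counted with and without grouping nodes into local clusters, where a cross-cluster message stands in for expensive interprocessor communication (parameters are illustrative):

```python
# Count how many edges in a message pattern cross cluster boundaries;
# nodes are assigned to clusters by simple block partitioning.
def cross_cluster_messages(cluster_size, edges):
    cluster = lambda node: node // cluster_size
    return sum(1 for a, b in edges if cluster(a) != cluster(b))

# 16 nodes in a nearest-neighbor chain: 15 messages per round.
chain = [(i, i + 1) for i in range(15)]
```

With clusters of 4, only 3 of the 15 messages cross a boundary; with no clustering (cluster size 1), every one of the 15 does. The sideways communication stays local, which is exactly the overhead-limiting effect described above.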
However, silicon chips are a LOT faster than wetware...and a lot more
expensive...and emit more heat. This probably implies that the ideal
design will be different, with more processing at each "cell", and less
parallelism. NOT none. And this is a design decision that is sensitive
to the costs of various ways of doing things, so if CPU prices plunge,
and massively parallel systems become more common, the "best choice"
will be different. It would, however, require a very unexpected
technology change to make the fastest AI design look much like the
design of the human brain. (Parallelism, yes, but there are reasonable
grounds to expect the details to differ.)
OTOH, we already know that at least one neural network design can
reliably produce intelligent behavior. There's a certain amount of
sense in going with something that you have good reason to believe will work.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT