From: James Higgins (email@example.com)
Date: Sun Jun 23 2002 - 11:46:49 MDT
At 11:21 AM 6/23/2002 +0200, Eugen Leitl wrote:
>Somewhen within the next decades, probably less than a century, a team
>will build an intelligent seed that enters a positive autofeedback loop,
>There is a considerable gap between what a given assembly of (molecular)
>switches could do in principle, and what humans can make it do. A
>superhuman AI does not have this limitation, or at least not for long.
>This stage of enhancement can buy you a couple orders of magnitude on the
>same hardware base. Considerably more, if this is reconfigurable logic, as
>is to be expected at the time.
>Given the state of system security, the global network is sitting there on
>a silver platter, ready to be picked up. Here's your potential to expand
>your hardware base by eight to nine orders of magnitude within minutes to
>hours, without even trying. Instead of a single AI everyone for some
Well, we would hope that the team that created this AI doesn't give it
access to the global network! Or, if it does, that the access would be so
highly restricted as to prevent such uncontrolled expansion (at least in
the near term, pre-superintelligence). However, they could conceivably be
ignorant enough to grant such access, or have a flaw in their security.
>strange reason assumes to be a given we're suddenly facing a population of
>realtime AIs well in excess of humanity's population. Due to
Or one very massive AI. I'd probably bet on a single, world-spanning AI
rather than it spawning billions of small AIs.
>co-evolutionary competition and population pressure the AIs will very soon
>start designing and building new hardware, which allows them to become
And just how would they make the leap from running on silicon to building
silicon? I'm almost certain that there is no capacity to do this
today. They would have to be able to perform 100% of the manufacturing and
assembly operation entirely by computer, assemble the working hardware,
and connect it to the net - all without human intervention. Even if there
were a facility with all of this capability under computer control (which
I don't believe is the case; much of it is manual - moving pieces between
manufacturing workstations, etc.), the operators would have to sit there
while the machines spent hours (days?) "doing their own thing".
>significantly superrealtime, about six orders of magnitude faster than
>before (~1 day : 3000 years). This would give them the edge over other AIs
>which chose not to, or were too slow. At this stage fabbing of new
>hardware (habitats, sensors, actuators, infrastructure) becomes the
>bottleneck, and pressure to expand it becomes ferocious, as is competition
>for new resources.
Once the genie is out of the bottle (a super-human AI having self-controlled,
beginning-to-end, automated manufacturing), yes. But that is quite a leap
from a Seed or Human-Equivalent AI, unless it is helped by the humans (given
access to the net and assistance in manufacturing).
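As an aside, the "~1 day : 3000 years" ratio quoted above does check out as
roughly six orders of magnitude; a quick, purely illustrative calculation:

```python
# Sanity check of the "~1 day : 3000 years" subjective-speedup ratio.
import math

DAYS_PER_YEAR = 365.25
subjective_days = 3000 * DAYS_PER_YEAR   # ~1.1 million subjective days
wall_clock_days = 1                      # elapsed real time

ratio = subjective_days / wall_clock_days
orders_of_magnitude = math.log10(ratio)

print(f"speedup ratio: {ratio:.2e}")
print(f"orders of magnitude: {orders_of_magnitude:.1f}")
```

So "six orders of magnitude" is slightly generous (it is closer to 10^6.0
than 10^6.5), but as a round figure the quoted ratio holds up.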
>I could describe a few things which could happen at the physical layer
>within the course of days to weeks to illustrate above pretty abstract
>description (all organic material done, darkness, large structures
>everywhere, frantic activity at all scales on the ground and in the air,
>much too quick for human eye to see), but clearly we're completely out of
>our depth here. Even if no new physics is involved, which it very well
>At this stage humanity needs active protection in order to survive. Mere
>indifference doesn't cut the mustard.
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:00:44 MDT