RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 23 2002 - 12:18:37 MDT


James,

As Eli has correctly pointed out in some of his past writings, a human-level
AGI may well be a *lot* more amenable to self-modification than an uploaded
human.

A human-level Novamente certainly would be.

The big difference is that, being an engineered rather than an evolved
system, an AGI is likely to have a more elegant and comprehensible (though
still somewhat messy and difficult, of course) design than an uploaded human.

This doesn't make self-modifying AI a panacea, but it suggests that an AGI
*may* have an easier time usefully self-modifying than an uploaded human...

-- ben g

James Higgins wrote:
> At 11:37 AM 6/23/2002 +0200, Eugen Leitl wrote:
> >On Sat, 22 Jun 2002, Michael Roy Ames wrote:
> > > been thinking about "what I'm gonna do after I upload" for a long time
> >
> >Okay, you're uploaded. I'm giving you full r/w access to /dev/mem (which
> >is a very large bitvector in which roughly every bit flips on a
> >microsecond scale) and the full GNU tool suite. Do your worst.
> >
> >Sure, that was unfair. Wait, here's your 3d voxelset image describing
> >your brain at ultrastructure scale with full dynamics. For all practical
> >purposes you can assume a 1:1 mapping to physics (transmembrane gradient,
> >ion gating action, diffusion, genome network, protein interaction matrix,
> >etc.). The toolbox you're given involves zooming, complete r/w access to
> >the 3d array, etc.
> >
> >What are you going to do with this? How long do you think you need to
> >build the tools to analyze this, and how many virtually drooling idiots
> >are going to document your failures in self-engineering?
>
> I agree completely. I also believe this applies to a human-equivalent
> AI. Progress can, and will, be made, but it won't be incredibly fast (at
> first).
>
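
To put rough numbers on Eugen's voxelset scenario, here is a minimal
back-of-envelope sketch in Python. The brain volume (~1.4 L), the voxel
edge (10 nm, a plausible "ultrastructure" scale), and one state byte per
voxel are illustrative assumptions on my part, not figures from his post:

# Back-of-envelope scale of the voxelset (all constants are assumptions
# chosen for illustration, not figures from the original post).
BRAIN_VOLUME_M3 = 1.4e-3   # ~1.4 liters
VOXEL_EDGE_M = 10e-9       # 10 nm voxels, roughly EM ultrastructure scale
BYTES_PER_VOXEL = 1        # one state byte per voxel (very optimistic)
TIMESTEP_S = 1e-6          # "every bit flips on microsecond scale"

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3    # ~1.4e21 voxels
snapshot_bytes = voxels * BYTES_PER_VOXEL       # ~1.4e21 B, about a zettabyte
dynamics_rate = snapshot_bytes / TIMESTEP_S     # ~1.4e27 B/s with full dynamics

print(f"voxels:          {voxels:.1e}")
print(f"static snapshot: {snapshot_bytes:.1e} bytes")
print(f"full dynamics:   {dynamics_rate:.1e} bytes/s")

Even the static snapshot comes out on the order of a zettabyte, and the
full dynamics multiply that by a million every second, which is the point
of the thought experiment: merely building the tools to analyze the
dataset is a major project, before any self-engineering begins.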