Re: AGI motivations (Sidetrack on Uploading)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Oct 24 2005 - 13:22:23 MDT


Richard Loosemore wrote:
>
> Michael Wilson wrote:
>
>> Michael Vassar wrote:
>>
>>> Trying to build a human-derived AI is the general case of which
>>> building a human upload is a special case... I'm skeptical about the
>>> differences being unavoidable. Surely true given current computers,
>>> surely false given uploads. There is a substantial space of possible
>>> AGI designs around "uploads".
>>
>> This is true in principle, but not in practice. Yes, uploads occupy
>> a small region of cognitive architecture space within a larger region
>> of 'human-like AGI designs'. However, we can actually hit the narrower
>> region semi-reliably if we can develop an accurate brain simulation
>> and copy an actual human's brain structure into it.
>
> Okay, I want to make a (probably futile) attempt to steer the
> conversation away from questions of uploading, because I think we are
> casually using the term "uploading" as if it were technically feasible, and
> the state of the art is so far away from that at the moment that we are
> in danger of wasting our breath.

That has always been the reason why I myself have attached little
likelihood to uploading, and have focused my attention elsewhere. It's
not just that it's presently infeasible, but that the technology and
science for (not necessarily Friendly) AGI appears to be very nearly a
strict subset of the technology and science for uploading. No matter
when uploading would have come along, (not necessarily Friendly) AGI
will come first and terminate the problem, one way or another.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

