From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Wed Jul 23 2008 - 11:01:54 MDT
To add to this, I find it significantly more interesting that the
current fear of out-of-control AI closely parallels, on the one hand,
fears of likely ecological scenarios brought on by overexploitation
and, on the other, fears of a Big Brother state marked by extremely
one-sided power relations and the loss of personal autonomy-- both
quite contemporary fears, attributable to current events and the
current paradigm of social relations.
One could, if one were so inclined, draw parallels to (not always pop)
science fiction of the 1950s, in which fears of alien intelligences
closely paralleled contemporary fears of communism.
Ross
On Thu, Jul 24, 2008 at 12:51 AM, Krekoski Ross <rosskrekoski@gmail.com> wrote:
> On Wed, Jul 23, 2008 at 6:59 PM, B Ziomek <bziomek@gmail.com> wrote:
>>
>> Physical humans with minimal upgrades or direct uploads: I'd say that if a
>> singularity of any kind does occur, there will be some intelligences left
>> which are recognizable in cognition. Whether existing in a sandbox
>> simulating the current world or fleeing the inner system, who knows, but I
>> find it extremely unlikely that any AI wouldn't want to keep at least a few
>> of its creators around, at least in a stored form for study.
>
> fair enough.
>
>> I may be making
>> a dangerous assumption here, but it seems unlikely that a logical being
>> would destroy something that may prove useful in the future in exchange
>> for a comparatively minuscule amount of storage space (heck, at worst our
>> genome is what, 5 MB? at least the plans to make a human would seem
>> guaranteed to survive any singularity).
>
> True, but again, 5 MB of complexity (actually less) is minuscule
> compared to the amount being generated daily by any individual.
> Granted, if we were to abstract away the generated complexity of all
> of humanity we might get some redundancy (I don't know; it depends on
> what philosophical framework one draws upon), but certainly it should
> be more than 5 MB.
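>
> (A rough back-of-envelope, for scale -- the figures here are my own
> ballpark assumptions, reading the ~5 MB as a per-person diff against
> a reference genome rather than the raw sequence:)
>
>     # ballpark, assumed figures -- illustration only
>     bases = 3.1e9                  # approx. human genome length, base pairs
>     raw_mb = bases * 2 / 8 / 1e6   # 2 bits per base -> ~775 MB uncompressed
>     variants = 4.5e6               # rough per-person variant count vs. a reference
>     diff_mb = variants / 1e6       # ~1 byte per variant -> a few MB
>     print(raw_mb, diff_mb)         # -> 775.0  4.5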
>
>>
>> But beyond this, how much will our true successors resemble today's
>> humanity? I think it's impossible to make a concrete guess, but that the
>> lower boundary of similarity is pretty far away. Uploaded humans will just
>> be emulations running on vastly superior hardware. As anyone who's ever run
>> a hardware emulator on a modern computer knows, the amount of processing
>> power it takes is usually far larger than the processing power the hardware
>> being emulated had, and hence the process is very inefficient.
>
> I argued a similar point a few months ago-- if humans were to be
> emulated perfectly, the hardware running the emulation would
> necessarily be more complex (in the Kolmogorov sense) than the
> emulated reality, down to the quantum level if quantum effects have
> any influence on the random input available to individual humans
> (chaos theory applies; I'm not talking about quantum effects in the
> brain, but about the non-predictability of, for example, the
> microdynamics of thermal systems). It would thus be more efficient to
> observe the system rather than emulate it. A vastly superior
> intelligence would certainly be aware of this fact, and if it is at
> all interested in self-improvement (defined, presumably, as an
> increase in complexity and cognitive capability, since that type of
> goal would most probably fall near the target area the initial
> designers aimed for), then it may very well wish to preserve humanity
> (as broadly construed by this list) in some recognizable form, simply
> because that allows for a diversity of direction. By direction I mean
> the likely evolutionary paths of human systems of thought that a
> preserved humankind would take, as distinct from those an isolated AI
> would take-- it is an advantage to have a diversity of intelligent
> systems if self-improvement is an end goal.
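>
> (To put the Kolmogorov point slightly more explicitly -- this is just
> my paraphrase of the standard invariance argument, up to additive
> constants, not a proof: if a program p running on machine M
> reproduces a system S exactly, then the pair (M, p) is itself a
> description of S, so
>
>     K(S) <= K(M) + |p| + O(1)
>
> which is to say the emulating machinery plus its program can never be
> much simpler than the thing it emulates.)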
>
> It's somewhat analogous to the argument that biodiversity is a
> scientific (and thus ethical, as well as economic) end in itself. The
> only reason we harvest and exploit the biosphere to the extent we do
> now is that we're tremendously short-sighted.
>
>> At the very least the uploaded humans would modify themselves to execute
>> more natively on whatever hardware exists, and once the self-tampering
>> starts, when will it stop?
>>
>
> Yes, self-modification is fine. But we don't really know how that'll
> turn out yet. Kinda like chimps trying to predict the stock market.
>
> Ross
>