Re: [SL4] Employment vs. Singularity

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Aug 22 2000 - 09:36:12 MDT


"Eliezer S. Yudkowsky" wrote:
>
> Samantha wrote:
> >
> > If, just as a for instance I don't know how to get to, if every person
> > had a decent living wage and all the physical necessities taken care of
> > (a la cheap nano-tech matter assemblers for instance), then every person
> > can be quite busily employed - doing exactly what they themselves find
> > most meaningful and interesting regardless of whether they are getting a
> > conventional paycheck for it.  Personally I would have a hell of a lot
> > more to do than watching TV if I no longer had to work for someone else
> > for a living.  I have more things to do and explore and work on than I
> > could finish in a hundred lifetimes of such 'pointless empty leisure'.
>
> How about if everyone who wished was uploaded and had a decent amount of
> computing power - at least a billion trillion brainpower, say?
>

YEAH.  That's what I'm talking about!  Heading out toward the unlimited,
where bean-counting and the chasing of dollars make utterly no sense.  And
some of that senselessness may be standing in the way of our getting there
quickly enough.

Sign me up!

> This scenario is basically the minimum SingInst is aiming for.  With
> luck, we won't see any major effects on the economy from any prehuman
> AIs that reach the market before this point.  (With less luck, there are
> a few economic tweaks that could stabilize things; but that whole
> scenario can ideally be avoided.)
>

Interesting.  What do you mean by "prehuman AI"?
Pre-augmented-human AI?  What do you believe is essential to stabilizing
the economy, what can and should change, and why?


> You seem to be visualizing a slow, gradual, planetwide scenario.  The
> Slow Singularity model has all kinds of potential for ultratech
> disaster, which is one reason why the SingInst model consists of one AI
> in a research lab reaching the point of true self-enhancement and taking
> off from there.  The marketable AIs before that point will hopefully be
> too dumb to have any major effect on the economy.
>

Not particularly slow, but definitely having a few stages quite
noticeably different from now before we get to the full SingInst model.
I hope there are a few stages along the way, for the simple reason that
even pro-Singularity people are not, imho, generally well equipped for
such a leap in a single go.  On the other hand, my time grows shorter
daily and I am quite anxious to get on with it.

I do not believe a single AI in a research lab is a viable model.
No matter how brilliant any one team is, I don't for an instant believe
they can go the whole way.  Nor do I believe that one and only one such
AI is remotely likely or desirable.  I don't think you can get there in
one jump even in the AI world (perhaps especially in the AI world).  I
think you will need an interacting mass of AIs bumping up against the
world, against us, and against each other before they bloom into full AI
power.  And I think several of the self-enhancement capabilities will grow
out of things needed in the economic sphere.  For instance, I think
self-modifying, self-improving and self-writing software will grow
fairly directly out of the dearth of programming skills and the
increasing complexity and criticality of our software systems.

> About the friendly AI thing:  Complex issue; we think we know how to
> handle it; no, it doesn't involve Asimov Laws or gratitude to creators
> or anything else anthropomorphic; yes, given everything we know about,
> it should work; no, success isn't certain; yes, we're pretty sure the AI
> scenario offers the best available probability of survival for the human
> race.

Is the AI the cosmic guardian and caretaker, or do we become one with it,
or do we become it, or some mixture of these and other possibilities?

- samantha



