Re: How hard a Singularity?

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jun 22 2002 - 19:34:51 MDT


(last post for a while - I promise; see what happens when I don't read the
list for a year...)

At 06:35 PM 6/22/2002 -0400, you wrote:
>Ben Goertzel wrote:
> >
> >> That there is *nothing special* about human-equivalent intelligence!
> >
> >
> > Wrong! There is something VERY special about human-level intelligence, in
> > the context of a seed AI that is initially created and taught by humans.
> > Human-level intelligence is the intelligence level of the AI's creators
> > and teachers.
> >
> > Our seed AI is going to get its human-level intelligence, not purely by
> > its own efforts, but largely based on the human-level intelligence of
> > millions of humans working over years/decades/centuries.
>
>Ah. Well, again we have very different models of intelligence. I don't
>think you can use human knowledge as the mindstuff of an AI. I don't
>think AI can be built by borrowing human content. What we're after is
>what produced that content. One is building brainware and, in the early
>stages, guiding that brainware into creating content. There may be some
>opportunities to nudge the AI into creating better content than she
>otherwise would have, but this violates the self-encapsulation rule of
>seed AI; an AI doesn't own a thought until she can produce it on her own.
>Initially almost all of AI development will violate this rule, of course,
>but the measure of how far you've gotten in constructing real, independent
>AI, an AI that has her own thoughts and isn't just churning through yours,
>is not having to do this any more.

I think a good model to use is humans. The human mind has been virtually
unchanged for thousands of years, yet we make much, much more progress in
one day today than was made over thousands of years 10,000 years
ago. Why? Because we, collectively, know more and are taught more in our
early years. Does this make a human from today smarter than a human who
lived 10,000 years ago? I guess that would depend on your criteria.

In order to educate the AI you must teach it using human knowledge, unless
you expect it to go from nothing to high-level and assembly programming on
complex microprocessors on its own (hope you've figured in decades for
that). "Human knowledge" is the basis and springboard from which an AI
will start out. You could have an AI that was 4x as intelligent as a
human, but if it starts with a completely blank slate it could take
centuries to just catch up to our present knowledge. Intelligence without
knowledge is useless.

>So you look at pouring human content into an AI, and say, "When we reach
>human-level, we will run out of mindstuff." And I look at creating AI as
>the task of building more and more of that essential spark that *creates*
>content - with the transfer of any content the AI could not have created
>on her own, basically a bootstrap method or side issue - and say: "When
>the AI reaches human level, she will be able to swallow the thoughts that
>went into her own creation; she will be able to improve her own spark,
>recursively."

You wouldn't run out of human knowledge until well after the AI exceeded
human-level intelligence (no one human can know even 1% of all that is
known today).

>(Hm. I think ve/vis/ver was better after all.)

Many of us, however, don't.
