From: Ben Goertzel (ben@goertzel.org)
Date: Sun Dec 08 2002 - 18:18:20 MST
> On Sunday, December 8, 2002, at 01:08 PM, Ben Goertzel wrote:
>
> > http://users.rcn.com/standley/AI/immortality.htm
> >
> > Thoughts?
> >
> > Can anyone with more neuro expertise tell me: Is this guy correct as
> > regards what is currently technologically plausible?
>
> The Singularity and, specifically, FAI is a faster, safer way of
> transcending. Super *human* intelligence is highly dangerous. Think
> male chimp with nuclear feces. Unless you've got some way to protect
> the universe from the super *humans*, we're probably better off with
> our current brains.
>
> --
> Gordon Worley
Gordon, this is an entirely different issue, of course.
I am not at all sure that a super human intelligence is more dangerous
than a would-be FAI intelligence.
Super human intelligence is, to some degree, a known quantity...
Nonhuman AGI is an uncharted domain, and in my view (as has been copiously
discussed on this list before!) it is very hard to guarantee in advance
that a nonhuman AGI will actually be Friendly... And I look forward to
understanding more about this subject as true human-level AGI gets
closer...
ben