RE: Humane-ness (resend due to addressing error)

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Feb 19 2004 - 22:25:41 MST


Hi,

> I didn't get the impression from CFAI that Eliezer thought the
> massively iterated pure self-modification should happen before the
> concepts and goal system reflected a high degree of convergence. His
> position seemed to represent that if we get the structural
> consideration correct, then given accurate information, the AI should
> voluntarily refrain from such self-enhancement until its "predicted"
> self-enhancements are regularly MUCH better than the best programmer
> enhancements, and ALL of the AGI's "practice" design attempts are
> analytically approved by the programmers on an ongoing basis. Plus a
> margin of safety.

I agree with all of that... I just don't think it gets you very far...

> We probably STILL can't conclude whether or not that convergence will
> be maintained into transhumanity,

This is my problem. I don't think that "keep doing stuff I think my
trainers would like" is the best principle for an AI ... it doesn't
generalize well beyond the human domain. It's a good principle but needs to
be supplemented with more abstract principles guiding it to figure out what
to do when there's no clear analogy with past situations in which it knew
what its trainers wanted...

> From my academic experience, most of the minds that COULD be
> contributing to the theory are pretty much doing all they can, on
> information overload, to filter out the crap from their respective
> field. That more or less results in ignoring content not phrased in
> readily processable terminology. So as I see it, one of the biggest
> challenges to the dissemination of Friendliness Theory is a matter of
> avoiding an easy misclassification by those individuals' "efficiency
> measures".

I agree there.

Although I don't fully agree with Eliezer's ideas, I have paid a fair amount
of attention to them because I do think they reflect some deep and
high-quality thinking... but I have not succeeded in getting many of my
academic or industry colleagues to take them seriously, because Eliezer's
style of exposition is very eccentric by contemporary mainstream standards.
I don't personally find his style of exposition BAD -- I like reading his
stuff -- but it's definitely eccentric, and it generally seems to appeal
more to laypeople with futurist inclinations than to formally-trained
scientists...

-- Ben G
