From: Samantha Atkins (samantha@objectent.com)
Date: Wed May 19 2004 - 23:04:29 MDT
On May 19, 2004, at 9:56 AM, Christopher Healey wrote:
> Well, FAI seems more a discipline of risk mitigation than anything else.
>
> What Eliezer seems to be saying is that any black-box
> emergent-complexity solution is to be avoided almost without
> exception, because if you don't understand the mechanism then you've
> ceded precious design-influence you could have brought to bear on the
> task. You could say that, when it comes to self-learning
> emergent-complexity, we understand a little bit about how to implement
> something we cannot predictably model through generalization. At least
> that is my impression, based on my limited experience.
I see no reason to believe that even the best of human brains has
sufficient capacity to fully understand even the mechanism of human
brains and thought, much less to fully understand a superhuman
intelligence. I actually find it incredible that anyone would suggest
otherwise. Being incredible, such a suggestion deserves very strong
evidence and argument to be at all believable. Thus far this has not
been forthcoming. The best tools I am aware of are not remotely up to
such a task in the hands of even the most gifted humans or
human-programmed computers. We may well be able to build a seed AI, but
I believe it is sheer fantasy that we will be able to fully understand
it or what it can and may do.
>
> So to minimize the chance of an undesirable outcome, we should use
> techniques we CAN predict wherever possible. Even in the position
> where we lack any theoretical basis to actually quantify the assurance
> of predictability, we can take actions that trend toward
> predictability, rather than use techniques that are known to reduce
> it. In other words, by maximizing the surface-area of our predictable
> influences against the rock-hard problem of FAI, we're more likely to
> push it in the direction we seek. Using a primarily emergent approach
> is more akin to randomly jamming an explosive charge under that rock
> and hoping it lands exactly 1.257m from the origin along a specific
> heading.
Actually, you would most likely remove all that allows the
seed intelligence to fully develop if you could successfully remove all
such possibilities, plus all possibilities of it discovering them on its
own. But then, I don't believe that we are competent to do so.
>
> The problem is still there, of course. Warren Buffett has a first rule
> for getting rich: Don't lose money! In the absence of statistically
> assured predictability, a similar path here seems prudent. Don't
> knowingly cede predictability!
Hell, we cede predictability every day in order to get solutions to
real problems that work reasonably well. Full understanding usually
arrives only after workable techniques have been in use for some time,
if then.
- samantha