From: Eliezer Yudkowsky (email@example.com)
Date: Sun Jun 13 2004 - 15:58:04 MDT
Samantha Atkins wrote:
> On Jun 13, 2004, at 7:56 AM, Eliezer Yudkowsky wrote:
>> Samantha, you write that you might have a badly warped view of what
>> kind of person you would like to be. "Badly warped" by what criterion
>> that I feed to the FAI? Your criterion? Someone else's? Where am I
>> supposed to get this information, if not, somehow, from you? When you
>> write down exactly how the information is supposed to get from point A
>> (you) to point B (the FAI), and what the FAI does with the information
>> once it's there, you'll have something that looks like - surprise! - a
>> volition-extrapolating dynamic. It's not a coincidence. That's where
>> the idea of a volition-extrapolating dynamic *originally comes from*.
> That is my point. The information is not necessarily available from the
> person[s] in sufficient quality to make wise decisions that actually
> work for the good of humanity.
Okay... how do you know this? Also, where do I get the information? Like,
the judgment criterion for "wise decisions" or "good of humanity". Please
note that I mean that as a serious question, not a rhetorical one. You're
getting the information from somewhere, and it exists in your brain; there
must be a way for me to suck it out of your skull.
>> It's more an order-of-evaluation question than anything else. I
>> currently guess that one needs to evaluate some "knew more" and
>> "thought faster" before evaluating "more the people we wished we
>> were". Mostly because "knew more" and "thought faster" starting from
>> a modern-day human who makes fluffy bunny errors doesn't have quite
>> the same opportunity to go open-endedly recursively wrong as "more the
>> people we wished we were" evaluated on a FBer.
> Well, we all can make whatever assumptions we wish for what "knew more"
> and "thought faster" would and would not remove or add to in our bag of
> human characteristics.
I realize it's currently a subjective judgment call, Samantha, but it looks
to me like "knew more" and "thought faster" tend to preserve underlying
invariants in a way that "more the people we wished we were" does not
*necessarily* do (or at least, the necessity is subject to decision).
Like... if I'd had access to, and been foolish enough to use, "more the
people we wished we were"-class transformations early in my career, I would
have screwed myself up irrevocably. Thinking longer and learning more, on
the other hand, cleared up at least some of my confusion, I hope. That's why
I would tend to put "knew more" and "thought faster" earlier in the order
of evaluation.
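The order-of-evaluation point can be sketched as plain function composition: the same transformation steps applied in a different order give a different extrapolation, and the riskier step is deferred until after the safer ones. This is only an illustrative toy, not anything from the actual CEV proposal; every name and field in it is a hypothetical stand-in.

```python
# Toy sketch of "order of evaluation" in an extrapolation dynamic.
# All names and fields are hypothetical illustrations, not a real model.

def knew_more(person):
    # Extend the person's knowledge; assumed to preserve underlying values.
    return {**person, "knowledge": person["knowledge"] + ["more facts"]}

def thought_faster(person):
    # Give the person more opportunity to reflect.
    return {**person, "reflection_steps": person["reflection_steps"] * 10}

def more_as_wished(person):
    # Self-modification toward "the person they wished they were" --
    # the open-endedly risky step, so it is evaluated last here.
    return {**person, "character": person["wished_character"]}

def extrapolate(person):
    # Order matters: the safer, invariant-preserving steps come first,
    # so the final step operates on a better-informed person.
    for step in (knew_more, thought_faster, more_as_wished):
        person = step(person)
    return person

alice = {"knowledge": ["some facts"], "reflection_steps": 3,
         "character": "current", "wished_character": "wished"}
grown_alice = extrapolate(alice)
```

Swapping `more_as_wished` to the front of the tuple would apply the risky transformation to the unimproved person, which is the failure mode described above.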
>> Right. AI-augmented self-improvement of humankind, with the explicit
>> note that the chicken-and-egg part of this problem is that
>> modern-day humans aren't smart enough to self-improve without stomping
>> all over their own minds with unintended consequences, aren't even
>> smart enough to evaluate the question "What kind of person do you want
>> to be?" over its real experiential consequences rather than a small
>> subset of human verbal descriptions of humanly expected consequences.
>> So rather than creating a *separate* self-improving humane thing, one
>> does something philosophically more complex and profound (but perhaps
>> not more difficult from the standpoint of FAI theory, although it
>> *sounds* a lot harder). One binds a transparent optimization process
>> to predict what the grownup selves of modern-day humans would say if
>> modern humans grew up together with the ability to self-improve
>> knowing the consequences. The decision function of the extrapolated
>> adult humanity includes the ability of the collective volition to
>> restrain its own power or rewrite the optimization function to
>> something else; the collective volition extrapolates its awareness
>> that it is just an extrapolation and not our actual decisions.
> Excellent. I get it!
>> In other words, one handles *only* and *exactly* the chicken-and-egg
>> part of the problem - that modern-day humans aren't smart enough to
>> self-improve to an adult humanity, and that modern-day society isn't
>> smart enough to render emergency first aid to itself - by writing an
>> AI that extrapolates over *exactly those* gaps to arrive at a picture
>> of future humankind if those problems were solved. Then the
>> extrapolated superposed possible future humankinds, the collective
>> volition, hopefully decides to act in our time to boost us over the
>> chicken-and-egg recursion; doing enough to solve the hard part of the
>> problem, but not annoying us or taking over our lives, since that's
>> not what we want (I think; at least it's not what I want). Or maybe
>> the collective volition does something else. I may have phrased the
>> problem wrong. But for me as an FAI programmer to employ some other
>> solution, such as creating a new species of humane intelligence, would
>> be inelegant; it doesn't solve exactly and only the difficult part of
>> the problem.
>> I may end up needing to be inelegant, but first I want to try really
>> hard to find a way to do the Right Thing.
> Thanks. That clears up a lot.
--
Eliezer S. Yudkowsky                 http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT