From: Samantha Atkins (samantha@objectent.com)
Date: Thu Sep 12 2002 - 23:42:12 MDT
Eliezer S. Yudkowsky wrote:
> Ben Goertzel wrote:
> >
> > Mapping a person's words into a model of their future words is not that
> > easy. Essentially, it requires mapping their words into a model of
> > their mind. This is a very hard "inverse problem." I require more
> > data in order to believe I have a reasonably good model of someone,
> > than you do. Those 2 messages of GK did not give me enough data to
> > form a plausibly confident model of the guy's mind.... Either you're
> > way better at mind-modeling based on scanty data than I am, or you're
> > just quicker to jump to conclusions in this regard...
>
> Or I'm judging against a different criterion than you use, one with a
> much narrower aperture, so that available data was sufficient to
> determine (with margin for error) that the model thus mapped was outside
> the aperture.
Please make your criterion explicit. I don't think it is wise
to indulge in being cryptic about this.
>
> PS: Cliff, the fact that Ben and I are speaking cryptically should be
> enough information for you to determine what we are speaking about.
> Hint: It has nothing to do with religion. (Unless I've mapped Ben
> incorrectly and he *is* talking about religion.)
>
> Crypticism can sometimes be very useful, and I won't tell you when, but
> it's *always* fun.
>
Well, if you want to keep people guessing and believe that is
"fun"... or if you believe that they somehow get smarter by
attempting to model your mind with less information... or ???
- samantha