From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Dec 14 2005 - 09:52:02 MST
I am entirely at a loss to know how what you just wrote bears on the
text that I sent to the list.
Did you mean that what I wrote was in the category of
> <--snip--> // astute points about poorly defined concepts that are
> meaningless upon deeper inspection
What *I* wrote was meaningless upon deeper inspection???
And I am not sure what your suggestion below is directed at.
Clarification?
Richard Loosemore
Jef Allbright wrote:
> On 12/14/05, Richard Loosemore <rpwl@lightlink.com> wrote:
>
>>Michael Vassar wrote:
>>
>>>Some posters seem to be very seriously unaware of what was said in
>>>CAFAI, but having read and understood it should be a prerequisite to
>>>posting here.
>
>
> Since this is in fact the SIAI discussion list, it seems proper and
> reasonable that those who exploit this forum be familiar with its
> background, whether they agree with it or not.
>
> I've been watching Eliezer's thinking in public for over ten years,
> and have been encouraged by the observation that over the last year or
> two he has forthrightly stated and shown that his understanding
> evolves, as in general it does for all of us. [More on this point
> below.]
>
> <--snip--> // astute points about poorly defined concepts that are
> meaningless upon deeper inspection
>
>> "Maximize the utility function, where the utility function
>>specifies that thinking is good"
>>
>>I've deliberately chosen a silly UF (thinking is good) because people on
>>this list frequently talk as if a goal like that has a meaning that is
>>just as transparent as the meaning of "put the blue block on top of the
>>red block". The semantics of "thinking is good" is clearly not trivial,
>>and in fact it is by no means obvious that the phrase can be given a
>>clear enough semantics to enable it to be used as a sensible input to a
>>decision-theory-driven AGI.
>>This is infuriating nonsense: there are many people out there who
>>utterly disagree with this position, and who have solid reasons for
>>doing so. I am one of them.
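
To make the contrast concrete, here is a minimal sketch in Python (the
dictionary-based blocks-world state is my own toy representation, not
anything anyone has actually proposed): the first utility function reduces
to a predicate over world states that a decision procedure can evaluate
directly, while nobody has shown how the second could even be written down.

from typing import Dict

# Toy world state: which block rests on which support,
# e.g. {"blue": "red", "red": "table"}.
WorldState = Dict[str, str]

def utility_blue_on_red(state: WorldState) -> float:
    """Well-defined: 1.0 exactly when the blue block sits on the red one."""
    return 1.0 if state.get("blue") == "red" else 0.0

def utility_thinking_is_good(state: WorldState) -> float:
    """Ill-defined: which feature of a world state counts as 'thinking',
    and how much of it counts as 'good'? There is no agreed answer."""
    raise NotImplementedError("no clear semantics to evaluate")

state = {"blue": "red", "red": "table"}
print(utility_blue_on_red(state))      # 1.0 -- trivially checkable
# utility_thinking_is_good(state)      # cannot be evaluated as it stands

The point is not that a richer representation could never capture such a
goal, only that its semantics is nowhere near as transparent as the
blocks-world case.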
>
>
> [more on evolving understanding]
> I am increasingly frustrated by the observation, and the experience, that
> on the various transhumanist lists a graph of posting frequency by
> individual would show a hump: those with enthusiastic but relatively
> incomplete thinking post far more than those with more coherent views,
> or than those who may not have strong views at all.
>
> This is to be expected, but it tends to promote regression to the mean
> more than leading-edge growth.
>
> Creative growth requires diverse input, but mining these transhumanist
> lists for nuggets of leading-edge ideas, or planting seeds of thought
> when a fertile opportunity presents itself, provides such sparse
> payback that I and many others question whether we've long since passed
> the point of diminishing returns.
>
> I think we've reached the point where we need new tools, an improved
> framework for collaborative thinking that includes concept mapping,
> argument mapping, and shared knowledge, and that goes qualitatively
> beyond the wiki and the email list. I don't have the bandwidth to
> create this, or even to organize such a project, and I see it coming
> just around the corner with the growing awareness of "web 2.0" and
> social software, but I would be very interested in contributing some
> time and resources to such a project.
>
> - Jef
>
>