From: Ben Goertzel (firstname.lastname@example.org)
Date: Sat Jul 28 2001 - 20:26:19 MDT
> I understood the metaphor and the implied mapping of the metaphor to your
> point. It was the underlying point that I was disagreeing with.
> The point
> was, as I understood it, that artificial-AI/seed-AI was better than
> augmenting humans based on some criteria.
I think that AI will be easier to achieve than significant human cognitive
augmentation.
This isn't a statement as to the *value* of either of these achievements,
just as to my estimates of their difficulty.
I also think that cognitive augmentation will be much more easily achieved
once we have real AIs to analyze neural and genetic data.
As for my goals, I have lots of them. My medium-term goal as a scientist is
to create real AI. This doesn't mean I think other goals are unimportant,
just that, as a single human being with strong interests in science, music
and literature as well as in having a life, ONE inordinately ambitious
scientific goal is enough. My secondary scientific goal is life extension,
as I'm rather interested in extending my own life indefinitely. But my hope
is to tackle that problem in 10 years or so with the help of my first real
AI.
-- Ben G
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:00:20 MDT