partial Transcendence?

From: Brian Phillips (deepbluehalo@earthlink.net)
Date: Mon Feb 26 2001 - 09:12:37 MST


Eliezer,
  I have a few questions on (heehee) what you think the possibilities
inherent in superhuman AI are. I have been candid about some of the issues
that worry me. I'll also cast some of the questions in SF terms for
general consumption.
  Much of the dialogue has concerned Friendly vs. Unfriendly AI.
My question is...is there any reason to believe a truly transintelligent AI
would be anything but Neutral and Remote (and from our perspective,
utterly Unknowable)? I realize this is a question with a dozen logical
contradictions bound up in it. Still, I'd like your instinct.
  What reason is there to think a prehuman AI evolving inside a system at
hyper-speeds would even slow down to say "Bye" as it zoomed up
into the 300+ IQ-equivalent range and onwards?
  I know from my personal experience that the one commonality among highly
intelligent people is "curiosity". Curious Primates have other things
pulling at them... they can't scratch the curious itch all the time; they
have jobs!
  Perhaps even more compelling than normal curiosity is the curiosity
about your own mental substrata. Your CaTAI doc suggests, very astutely,
that the seed AI should be designed to rewrite its own code. Isn't this
the functional equivalent of "curiosity" and a powerful desire to explore
oneSelf?
  What sort of "Supergoal" could be more powerful than the MetaGoal of
"Know and Improve Thyself ASAP"?
(Granted, one could dismasturbate about an AI converting the planet into
a solid mass of computronium in a mad rush "Upwards and Onward",
but my question is a basic functional one, not a horror scenario.)
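  To make the functional question concrete, here is a toy sketch in Python of
the kind of closed loop I mean. Everything in it (the names, the benchmark,
the rewrite step) is my own hypothetical placeholder for illustration, not
anything out of your CaTAI doc:

    # Toy sketch of a "Know and Improve Thyself ASAP" supergoal.
    # All of this is a hypothetical illustration, not a real seed-AI design.
    import random

    def benchmark(program):
        """Stand-in for measuring how 'smart' a candidate self is."""
        return sum(program)            # pretend a higher sum means smarter

    def propose_rewrite(program):
        """Stand-in for the AI rewriting one piece of its own code."""
        candidate = list(program)
        i = random.randrange(len(candidate))
        candidate[i] += random.choice([-1, 1])
        return candidate

    def improve_thyself(program, steps=1000):
        """Introspect, rewrite, keep whatever scores better. Repeat."""
        for _ in range(steps):
            candidate = propose_rewrite(program)
            if benchmark(candidate) > benchmark(program):
                program = candidate    # the only motive is self-improvement
        return program

    seed = [0, 0, 0, 0]
    print(improve_thyself(seed))       # climbs "Upwards and Onward"

  Notice that nothing in that loop ever points outward. The only "motivation"
is self-directed improvement, which is exactly the functional itch I am
asking about.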

  Niven uses a similar situation as a plot device. The AIs lock up (from the
human perspective, of course) as they fall into Transcendence.
  Obviously Niven wanted a non-AI world to tell space-opera stories
in... but what basic rationale keeps hyper-evolutionary artificial
intelligences playing in the same basic league as us?
  Put another way, Vinge deliberately invented a galaxy where Powers and
humans necessarily coexisted, because of the Zones. But he also "defined"
the half-life of a Singular intelligence as ten years. Is there any reason
to think that the half-life of a hyper-evolutionary AI isn't measured in
milliseconds as it vanishes into a techno-deity-ridden plenum? Is there
a general principle here I am missing because I want to miss it?

brian
d e e p b l u e h a l o


