From: pdugan (email@example.com)
Date: Fri Jul 22 2005 - 23:42:18 MDT
Ah, si si si. I'm arguing the exact same thing, Ben. Basically: talking about
risk (intelligently) marginalizes risk. I'm not suggesting anything like the
Big Brother AI you describe in your essay. I'm thinking more of a Forum
moderator with benign, perhaps weakly transhuman intelligence, where the
forum consists of the entire internet. If we could enable, via non-invasive
nanotech, direct telepathy between persons, with an AGI as moderator, that
would be hot. But even if we have a bunch of weakly transhuman eggheads (in
the sense of cutting edge cognitive processes constrained by human clock
speeds) talking over the internet, that is the beginning of a "Global Brain"
working to survive and prosper. The more we interact, the more our fears
contract. I'm like a white Johnnie Cochran!
>===== Original Message From Ben Goertzel <firstname.lastname@example.org> =====
>> I would
>> argue that lists
>> like SL4 and the transhuman movement in general is a precursor to
>> the Global
>> Brain, where risks are marginalized by interaction between minds.
>I have argued just the opposite, though -- that the Global Brain may be a
>precursor to the Singularity ... and that this particular route to the
>Singularity may be one of the safer possible paths...
>Now, this doesn't mean that the post-Singularity world won't have some
>Global-brain-ish aspects. It might well. I suspect that post-Singularity
>we will see some sort of unified mind, as opposed to the fragmentation of
>intelligence we now see on Earth. But that is very hard to say with any
>confidence, of course...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT