From: sam kayley (firstname.lastname@example.org)
Date: Tue Feb 22 2005 - 15:23:20 MST
----- Original Message -----
From: "Michael Wilson" <email@example.com>
> furthermore less effective than simply developing uploading
> technology. As for uploading, FAI is a better idea; we don't know
> if humans can safely get over the initial self-modification hump
> without a personal superintelligent transition guide. We can design
> an AGI for rationality and stability under reflection, whereas
> humans and humanlike cognitive systems don't have these features.
> > We should experiment with human augmentation to get a better
> > idea of how being smarter affects consciousness, preferably
> > expanding the mind of an adult so they can describe the transition
> That would be nice, despite the risks of having enhanced humans
> running around, but we don't have the time. The tech is decades
> away and people are trying to build Unfriendly seed AIs right now.
> I'm not saying we shouldn't try to enhance humans; we should. It's
> just that FAI can't wait for better researchers.
Would designing a goal system for uploading a bunch of people (and letting
them monitor each other for the desire to take over the world) be simpler?
This would then allow the next theory of morality to be developed at a huge
speedup, with as many Eliezers as desired :)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT