From: Brian Atkins (firstname.lastname@example.org)
Date: Sat Jun 22 2002 - 11:38:15 MDT
Eugen Leitl wrote:
> We'll just have to agree that our reality model is different. I'd wish
> we'd have validated cryonics, there's a considerable unknown lurking in
> there about radical life extension approaches available to us today.
See, Eugen, this is one of the major complaints people around these parts
probably have regarding your ideas: they are based on things you wish for
but that don't seem to really exist or work in reality. I know you WISH we
had working cryonics, perfect anti-aging and disease prevention tech, and
that everyone had their own mini space colony, but none of this seems
likely to happen any time soon.
Meanwhile, rather than admitting that there just might be a /possibility/
of fixing all this via an AI technology, one that can be built and tested
in such a way as to be likely less risky than letting human uploads run
wild, you aren't interested in even seriously investigating it.
We do have differing reality models, and yours seems based on an utter
certainty that AI must go evil or uncaring (or at least that we can't tell
what will happen). Perhaps this is why you never quite find the time to
read CFAI. We've all certainly spent plenty of time trying to fully
understand your reality model, but I'm not seeing that flexibility on
your side.
-- Brian Atkins Singularity Institute for Artificial Intelligence http://www.intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT