Re: [sl4] A model of RSI

From: Eric Burton (brilanon@gmail.com)
Date: Fri Sep 26 2008 - 19:47:46 MDT


>I have a hard time picturing what disaster caused by a failure to achieve AI would be
>worse (in the context of human ethics) than a gray goo accident killing all DNA-based
>life. Could you elaborate?

If there were a civilization on Earth capable of building
self-replicating nanomachines efficient enough to pose a risk to life
and limb, presumably those same people could make a counter-agent, or
would have some other means of containment at hand. Once a technology
is mature, it becomes far easier to control. I think we tend to
imagine disasters that could only arise in the far future as if they
were to occur in the world of today, where they would present
proportionately more menace. We can speculate about what civilizations
supercharged by future technology could bring to bear on these
problems, but our speculation is necessarily limited by today's scope.
I suspect these risks will shrink as we approach them.

On 9/26/08, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- On Fri, 9/26/08, Eric Burton <brilanon@gmail.com> wrote:
>
>> I'd like to clarify. I'm not saying we shouldn't be concerned by
>> planet-level extinction events. But one thing we know about biological
>> life is that once it comes into existence, it persists in some form
>> for at least billions of years and, on present evidence,
>> indefinitely.
>
> It has happened at least once, so far.
>
>> Perhaps technological progress is a similarly indomitable process,
>> once bootstrapped. Then we could worry about it superseding or
>> otherwise trampling us. But this doesn't seem like a more immediate
>> risk to civilization than the crises we're exposed to by scientific
>> backwardness. Machines are a long way from competing for space and
>> resources with us, and that situation is generally improving. I'd
>> argue that outbreaks of grey goo are a sign of a civilization that has
>> solved a great deal of its other problems!
>
> I have a hard time picturing what disaster caused by a failure to achieve AI
> would be worse (in the context of human ethics) than a gray goo accident
> killing all DNA-based life. Could you elaborate?
>
> -- Matt Mahoney, matmahoney@yahoo.com
>
>


