Re: Fighting UFAI

From: Marc Geddes
Date: Wed Jul 13 2005 - 23:18:56 MDT


How many times do I have to tell you all...



I swear I'll prove it if it's the last damn thing I
ever do

Yeah, I admit my opinion is still (mostly) intuition,
but that doesn't mean it's wrong. The idea that
there's no objective morality is educated guess-work
as well, you know. Why should my guess be any more
likely to be wrong than Eli's guess?

O.K., so my attempts to intellectually mix it with
Wilson and Yudkowsky have been pretty pathetic so far,
I must admit - as humorous as Wile E. Coyote trying to
beat the Road Runner. But I'm still convinced it's only a
matter of time before I intellectually have them both
on the ropes ;)

I repeat my guess again:

*Computational intractability is what will always stop
a UFAI from endless self-improvement.* I think any
UFAI can only improve to a point before being *jammed*
by intractability. So yes, I think that unfriendly AI
is possible, but only of a kind that is limited in
how far it can improve itself.

Objective morality does not constrain the *content* of
the goal system, but suppose it constrains the
*structure*? (the process of acting upon the goal
system). What if objective morality will always *jam*
an unfriendly goal system by hitting it with
computational intractability?
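To give a toy sense of the kind of blow-up I have in mind (this is my own illustrative sketch, nothing more - the model of an agent brute-force searching over binary design strings is entirely hypothetical), exhaustive search over candidate self-modifications grows exponentially with design size, so each further round of "self-improvement" costs exponentially more than the last:

```python
# Toy illustration (hypothetical): an agent that searches for a better
# version of itself by exhaustively enumerating binary design strings.
# The search space doubles with every extra design bit, so a fixed
# compute budget only pays for a handful of improvement rounds.

def candidates(design_bits: int) -> int:
    """Number of candidate designs an exhaustive searcher must examine."""
    return 2 ** design_bits

def rounds_affordable(budget: int, start_bits: int) -> int:
    """How many improvement rounds fit in a fixed evaluation budget,
    assuming each round adds one bit of design complexity."""
    rounds = 0
    bits = start_bits
    while candidates(bits) <= budget:
        rounds += 1
        bits += 1
    return rounds

# With a budget of a million evaluations, starting from a 10-bit design,
# the search is "jammed" after only 10 rounds (2**20 > 10**6).
print(rounds_affordable(10**6, 10))  # prints 10
```

Of course, a real self-improver needn't search exhaustively - the sketch only shows why *some* structural constraint forcing it toward brute force would cap it hard.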

So, an unfriendly AI cannot recursively self-improve
past a certain point. Only a friendly AI can. That's
my story and I'm sticking to it.

Do I have any hard evidence to support my assertions?
Not yet. It's still (mostly) intuition. Should I
assume I'm wrong? Of course! It is only rational to
assume the gun is loaded (I'm not really crazy, you
know, I only act that way ;)

THE BRAIN is wider than the sky,  
  For, put them side by side,  
The one the other will include  
  With ease, and you beside. 
-Emily Dickinson
'The brain is wider than the sky'
Please visit my web-site:
Mathematics, Mind and Matter

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT