Some new ideas on Friendly AI

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed Feb 23 2005 - 22:05:36 MST


Brilliant, Ben! In fact, that's exactly what I've been
trying to say intuitively all along ;) I'm pleased to
note that this seems to be at least partially based on
conversations that took place on SL4.

Anyway, what you wrote is almost exactly what I
*would* have written if I had your knowledge and IQ.
You've 'extrapolated my volition' ;)

I feel we're actually starting to close in on the
outlines of the real theory of morality.

There is just one radical modification I want to
suggest... ;)

You say that ITSSIM doesn't suggest any content for
the goal system... it doesn't tell us what the AI
should do. I wonder. What I want to suggest is that
the ITSSIM procedure will only work for *some* initial
goals. So I think there is a class of initial goals
that will be stable (can meet the ITSSIM requirements)
and a class of initial goals that won't. And guess
what the magic class of valid initial goals might
be... Suppose it's precisely the ones that represent
Friendly goals?

In that case, working out ITSSIM would completely
solve the problem of initial content, and vice versa.
Not just any old initial input will work for the
ITSSIM function. There will be some particular class
of goals that serve as valid input data for the ITSSIM
function, and I conjecture that the 'valid input' for
which the ITSSIM function provides a sensible answer
is precisely the Friendly goals, and only those.
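
To make the conjecture a bit more concrete, here's a
toy sketch in Python (purely my own illustrative
formalism: the names itssim_step and certifies_safety,
and the way goals are represented, are my inventions,
not Ben's actual ITSSIM definition). The point is just
to show ITSSIM as a *partial* function: it returns an
answer only on a restricted class of goal inputs, and,
if my conjecture is right, that class is the Friendly
goals:

    # Toy sketch only -- not Ben's actual ITSSIM formalism.
    # A goal system is modelled as a frozen set of goal names.
    # itssim_step is a partial function: it approves a proposed
    # self-modification only when the goal system can certify
    # that the modification preserves its own safety invariant.
    from typing import Optional, FrozenSet

    GoalSystem = FrozenSet[str]

    def certifies_safety(goals: GoalSystem, modification: str) -> bool:
        """Stand-in for a real proof check: here a goal system can
        certify a modification only if it explicitly contains a
        safety-preservation goal."""
        return "preserve_safety_invariant" in goals

    def itssim_step(goals: GoalSystem, modification: str) -> Optional[str]:
        """Defined (non-None) only on goal systems that pass their
        own safety certification -- the 'valid input' class."""
        if certifies_safety(goals, modification):
            return modification  # valid input: modification approved
        return None              # invalid input: no sensible answer

    friendly = frozenset({"preserve_safety_invariant", "help_humans"})
    unfriendly = frozenset({"maximize_paperclips"})

    print(itssim_step(friendly, "improve_planner"))    # 'improve_planner'
    print(itssim_step(unfriendly, "improve_planner"))  # None

Obviously the real certification step would be a
formal safety proof, not a string lookup; the toy only
illustrates the 'valid input domain' idea.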

I'm saying that Content constrains Structure and
Structure constrains Content. To understand a little
better what I'm getting at, please read my other post
('Does Friendliness Structure constrain Content and
vice versa').

We're closing in, man. No doubt about it. I've been
having a torrent of ideas recently, and at last a real
theory is starting to emerge from my initial
gibberish!

The 'top-down strategy' for my own little AGI project
is nearly complete. For a very brief outline of some
of my key ideas, I refer you again to my essay 'Towards
a Science Of Morality' (now slightly revised):

http://www.imminst.org/forum/index.php?act=ST&f=3&t=5257&s

and also my final skeleton outline for my 8-level
model of intelligence (of course this is only a vague
intuitive starting point, not a real design):

http://www.sl4.org/wiki/TheWay

The time for bullshit is over. Time to get real. No
more philosophy, no more gibberish, no more fuzzy
thinking. Just hard logic and maths.

In short, it's time for me to hit the books at last...

P.S. The sooner Yudkowsky and Wilson realize I'm right
about everything, the better for everyone ;)

 



