From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 04 2004 - 13:24:21 MDT
Hi,
> > Similarly, I think, a FAI should weight "free choice of sentient
> > beings" pretty high among the values it tries to optimize.
>
> And you'll pick the weighting yourself?
The weighting should be chosen according to the Golden Section ;-)
> And it'll last for the next
> billion years?
In my view, our engineering plans are only useful for guiding the launch
from the current realm to the next realm.
I don't think that the specific weighting parameters chosen by us, at the
launch of the Singularity, are going to persist unto eternity.
A lot of stuff that we can't now forecast is going to happen.
> Ben, I don't think I've ever seen you try to
> think of a
> single thing that could go wrong with *your own* solutions, whatever
> criticism you apply to mine.
Of course a lot of things could go wrong with a Novamente-based AGI.
I'm basically deferring the careful analysis of what these are until
after we have an infrahuman Novamente AGI to play with, because I think
this analysis can be conducted in a much more intelligent and useful way
at that point.
> I'll make it a challenge: Can you show me a single bit of
> self-criticism,
> a single extrapolation of error or catastrophe in your own
> plans, in any of
> your online pages? (No sudden updates, that's cheating.)
There is plenty of self-criticism in my online writings; for instance,
search
http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDreams.htm
for the section beginning with
"So what did we do wrong? Frankly, all kinds of things. So many
things. It's basically impossible to do anything interesting without
making a lot of mistakes. ..."
You're not going to find many extrapolations of AI catastrophe in my
writings, because as I keep saying, I think it's too early in the
development of AGI for us to be able to make decent predictions of what
will or won't lead to a catastrophe.
-- Ben G