From: Michael Roy Ames (firstname.lastname@example.org)
Date: Fri Aug 23 2002 - 19:17:16 MDT
Alden Streeter <email@example.com> wrote:
> So then the same question can be asked of a Sysop-level AI - instead of
> working to help humans to achieve their petty, primitive, evolutionarily
> determined goals, why not just use its power to change the humans so they
> have different goals?
Why not? Well, I for one want to be empowered... not overpowered. I would
certainly listen to advice from a Super Intelligence (SI), and would
probably decide to take it ;) but I would definitely not want to be cut out
of the decision loop. So, *that's* why not.
Also, commenting on the "petty, primitive, evolutionarily determined goals"
phrase... for any given being, except one, there will always be some other
beings more advanced and more intelligent than ver. This applies to SIs
too. Therefore, the question boils down to: which level of intelligence
gets to decide? Answer: the highest intelligence on the ladder
who gives a damn about those beneath ver. Friendly AI is about making sure
the AI 'gives a damn' and, to the maximum possible extent, assists us in a
manner we would consider friendly - even at our lower level of intelligence.
> Shouldn't it, with its vastly superior intelligence,
> be able to think up better goals for the humans to have than the humans
> thought of for themselves?
Yes. But every intelligent being is going to define 'better' in a different
way... and there's the rub.
> And why should humans not want the AI to have
> this type of power? - if the AI changed their goals for them, they would of
> course immediately realize that their new goals were the right goals all
> along.
In a word: autonomy. Another word: freedom. Most humans don't want these
things _taken_ from them, even if the Being taking them is much greater than
they are. However, it is also true that most humans would willingly
_give_up_ some of these very same treasures if convinced they will benefit
in other ways. In what ways? Security. Community. Power.
> if I am covering old ground just let me know.
You are definitely covering old ground; this reply has barely scratched the
surface ;)
Suggestion: Read through the archives. They contain many excellent
discussions, and you will understand why I put the smiley face on the end of
the last sentence. Afterwards, blow holes in the Friendly AI idea... if you
can... no, really - please try.
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT