From: Matt Mahoney (email@example.com)
Date: Tue Nov 09 2010 - 13:21:18 MST
Such strange ideas about AI.
Alexei Turchin wrote:
> 0) Anyone should understand that AI can be a global risk and that friendliness
>of the system is needed. This basic understanding should be shared by the maximum
>number of AI groups (I think this is already done).
As if there will be more than one. Intelligence depends on knowledge and
computing power. Groups are collectively smarter than their members. There will
be only one AI. It is already being built. It is a bigger and smarter internet,
a global brain where more and more work is automated. *That* is the context in
which you must consider friendliness.
> 1) The architecture of the AI should be such that it uses rules explicitly (i.e.,
>no genetic algorithms or neural networks).
The purpose of AI is to make machines that are smarter than humans. This implies
that you can't predict or understand them, regardless of what algorithm they
use. Otherwise you just have a sub-human agent.
> 2) The AI should obey the commands of its creator, and clearly understand who
>the creator is and what the format of commands is.
The internet is being created by billions of people. Besides, the goal is to
have AI do what you want, not what you tell it. That will happen if or when AI
knows what you know (a prerequisite of effective communication between human and
machine). AI will model minds, and these models will get better over time.
> 3) The AI must comply with all existing CRIMINAL and CIVIL laws. These laws are the
>first attempt to create a friendly AI – in the form of the state. That is, an attempt
>to describe a good, safe human life using a system of rules (or a system of
>precedents). The number of volumes of laws and their interpretations speaks to the
>complexity of this problem - but it has already been solved, and it is not
>a sin to use the solution.
No, it hasn't been solved, because nobody knows what laws already exist. The law
is already too complex. How are you going to enforce it? You will need AI just to
do the legal research. This should also give you a clue just how hard it is to
define "friendliness". Computers demand that you specify every bit operation, not
leave ambiguity to the whims of judges interpreting the law.
> 4) The AI should not have secrets from its creator. Moreover, it is obliged
>to inform the creator of all its thoughts. This prevents a rebellion of the AI.
A global brain AI, even without secrets, would not give you enough brain power
to understand what it knows. Secrecy is not the issue. The issue is that if an
AI knows what you know and is smarter than you, then it could predict your
actions better than you could predict them yourself. You won't even know that
it controls you.
> 5) Each self-optimization step of the AI should be dosed in portions, under the
>control of the creator. And after each step a full scan of system goals must be run.
Please tell me how you plan to test the goals of the internet.
> 6) The AI should be tested in a virtual environment (such as Second Life) for
>safety and adequacy.
How do you plan to simulate 6.7 billion users?
> 7) AI projects should be registered with centralized oversight bodies and
>receive safety certification from them.
Who does that now for the internet?
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT