From: micah glasser (micahglasser@gmail.com)
Date: Tue Dec 13 2005 - 11:00:20 MST
Intelligence cannot help you select for the good. The Good must be
programmed into the AI. Once the AI knows what the Good is, its
intelligence will surpass any human intelligence in figuring out how to
bring about the Good. If the Good is not programmed into the machine as
its super-goal, then it will certainly be malevolent. Superintelligence
is not a god. It is merely a tool.
On 12/13/05, David Picon Alvarez <eleuteri@myrealbox.com> wrote:
>
> I don't have time to reply to this in detail, but:
>
> From: "Phillip Huggan" <cdnprodigy@yahoo.com>
> > I notice an analogy between AGI ideas and the evolution of government.
> > We have democracy, which resembles CV. However, Collective Volition can
> > be improved upon. Eliezer's CV essay remarks that no one should be at
> > the mercy of another's arbitrary beliefs. If you make people more like
> > they'd like to be, I think you are magnifying the bad in people too.
> > Regardless, freedom and free will are really at our core. Having an AGI
> > enforce a simple Charter of Rights and Freedoms would ensure none of us
> > are impinged upon, instead of damning the minority. The CV essay states
> > that no one is in a wise enough position to make normative judgements
> > about such things, but this is simply not true. There are plenty of
> > people employed in social sciences who don't do much of value. But some
> > of their products include very well thought out documents. One of the
> > few books I've kept with me through my moves is titled "The Human Rights
> > Reader". Also "The Canada Charter of Rights and Freedoms"
> > http://laws.justice.gc.ca/en/charter/ is being used as a model in many
> > developing nations. Obviously this is not an optimal goal system, but I
> > think it is an improvement on CV. I don't know how difficult it would be
> > to program an AGI to implement such a charter while still preserving or
> > effecting/accelerating the many types of progress we seem to have open
> > to us in the absence of AGI. Earth is bountiful enough that there aren't
> > any tough ethical zero-sum dilemmas where an AGI actually would have to
> > take essential-for-charter physical resources from one judged inferior
> > person and give to another judged superior person, at least until just
> > before the end of the universe.
>
>
> 1. If such a charter of rights is what people would want if they were
> smarter and friendlier, CEV would select for it, so CEV includes such a
> charter of rights as a possible outcome.
> 2. A superhuman AI or RPOP or call it what you will is going to be able to
> work things out to a much more significant level of refinement than any
> human political philosopher, and imposing the limitations of human
> political philosophers would be akin to enforcing speed limits designed
> for cars on spaceships. If you're so sure the charter is perfect, or near
> perfect, then there's no problem: CEV will find that's the right mode of
> organizing things. But I doubt it is. Note that Eliezer says "arbitrary
> beliefs"; that "arbitrary" has meaning there.
>
>
> --David.
>
>
--
I swear upon the altar of God, eternal hostility to every form of tyranny over the mind of man. - Thomas Jefferson