From: Brian Atkins (email@example.com)
Date: Sat Mar 24 2001 - 10:33:08 MST
"Christian L." wrote:
> OK, you had a definition for "ending all evil". My mistake. I had looked
> around Eliezer's website without finding anything.
> >To eliminate all INVOLUNTARY pain, death, coercion, and stupidity from
> >the Universe.
> >Any problems?
> Yes, the problems remain. While "death" can be clearly defined, "pain" and
> "coercion" cannot. Have you got separate definitions for these too? Is it
You can go look them up in the dictionary if you aren't sure. But seriously,
these definitions will only become better defined as science progresses. The
sysop will determine the exact boundaries, individually tailored to each
sentient's own mind and wishes.
> All interaction between humans includes various degrees of coercion, from
> suggestion to persuasion to brute force. Where do you draw the line?
I can't say today exactly where that line will be, but it _will_ be drawn
by the sysop in a sysop scenario. It will probably fall around brute force,
unless there is too large a gap in intelligence between the two particular
minds interacting, in which case it might extend as far down as suggestion.
> My original question was meant as a rhetorical one. The point is, with a
> subjective definition of "evil", there cannot be a uniform set of rules that
> will "end all evil" as defined here.
Well, in some sense this is all about making a science of it: the sysop
will have to determine where the boundaries lie when it comes to
involuntary experiences and conditions.
> I fail to see the need for discussing concepts like good, evil, morality or
> ethics at all, or how a Power/SI would relate to them. Ethics seem to be
> little more than rules set up by humans in order to maintain a fairly stable
> society. I don't see how that can have any meaning in the post-Singularity
> world or even in the last years leading up to the Singularity. I admit that
I find that a bit hard to believe. Do you think everyone will suddenly just
become super-nice? Or are you saying that things will become so different
that "rules" cannot apply?
> I haven't read Eliezer's "Friendly AI" paper (is it out yet?), but right now,
An interim version was released to this list a month ago:
> I can see no way to determine how an AI would react towards humans. If
> anything, the most logical thing would probably be either extermination, or
> forced uploading (Best use of material resources). But personally, I would
> rather avoid speculation about post-singularity issues completely.
Hiding your head in the sand is probably not a good idea, at least if you
want to have a say in what comes later.
> >>The scary thing about that is, who gets to define what constitutes "evil?"
> >Someone has to do it.
> No, no one has to do it. All we have to do is build a seed AI (unless we are
> talking about Asimov Laws...).
The sysop will do it IMO, and it will be a very good thing.
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:00:20 MDT