From: Simon Gordon (firstname.lastname@example.org)
Date: Sat Oct 04 2003 - 04:14:55 MDT
--- "Eliezer S. Yudkowsky" <email@example.com>
> A critical failure on a Friendly AI skill roll means
> that the players must
> roll 3d10 (roll three 10-sided dice and add the
> results) on the
> Friendly AI Critical Failure Table
> 3: Any spoken request is interpreted (literally) as
> a wish and granted,
> whether or not it was intended as one.
I think this could be dangerous: it depends on the
minds of others and whether they are capable of making
"bad wishes". I would exclude this possibility as a
precaution.
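As an aside, the 3d10 roll quoted above is just three independent 1-10 draws summed, giving results from 3 to 30. A minimal sketch in Python (the function name and seed handling are my own, purely for illustration):

```python
import random

def roll_3d10(rng=None):
    """Roll three 10-sided dice and add the results (range 3-30)."""
    rng = rng or random.Random()
    return sum(rng.randint(1, 10) for _ in range(3))

# Example: a single roll on the Friendly AI Critical Failure Table
result = roll_3d10()
print(result)
```

Note that the sum of three dice is bell-shaped rather than uniform, so the middle entries of the table come up far more often than 3 or 30.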
> 4: The entire human species is transported to a
> virtual world based on a
> random fantasy novel, TV show, or video game.
Or a world you made up ;-0
> 5: Subsequent events are determined by the "will of
> the majority". The AI
> regards all animals, plants, and complex machines,
> in their current forms,
> as voting citizens.
I don't think complex machines should be given a vote;
plants maybe, animals unlikely. Advanced posthumans,
yes definitely - and their voting power increases as
they become more advanced.
> 6: The AI discovers that our universe is really an
> online webcomic in a
> higher dimension. The fourth wall is broken.
Interesting. Hahaha :)
> 7: The AI behaves toward each person, not as that
> person *wants* the AI to
> behave, but in exactly the way that person *expects*
> the AI to behave.
I think a balance of the two would be nice.
> 8: The AI dissolves the physical and psychological
> borders that separate
> people from one another and sucks up all their souls
> into a gigantic
> swirly red sphere in low Earth orbit.
Er... that sounds like a bad scenario to me: let each
soul be free.
> 9: Instead of recursively self-improving, the AI
> begins searching for a
> way to become a flesh-and-blood human.
The AI is already flesh-and-blood. A sufficiently
advanced posthuman realises that a well developed AI
is virtually equivalent if not the same as an IA.
> 10: The AI locks onto a bizarre subculture and
> expresses it across the
> whole of human space. (E.g., Furry subculture, or
> hentai anime, or see
> Nikolai Kingsley for a depiction of a Singularity
> based on the Goth
I think there is room in the universe for multiple
subcultures to be expressed, so I find this scenario
intensely unlikely - unless of course there is one
predominant subculture of the future that everyone can
agree is the best, and which is decided on by the
majority to be a universally Good culture.
> 11: Instead of a species-emblematic Friendly AI, the
> project ends up
> creating the perfect girlfriend/boyfriend (randomly
> determine gender and
> sexual orientation).
No need to randomly determine this ~ ve will decide it
for vour selves :-D
> 12: The AI has absorbed the humane sense of humor.
> Specifically, the AI is
> an incorrigible practical joker. The first few
> hours, when nobody has any
> idea a Singularity has occurred, constitute a
> priceless and irreplaceable
> opportunity; the AI is determined to make the most
> of it.
I think the Singularity has already happened ~ to a
lesser or greater degree (actually greater, but hey,
> 13: The AI selects one person to become absolute
> ruler of the world. The
> lottery is fair; all six billion existing humans,
> including infants,
> schizophrenics, and Third World teenagers, have an
> equal probability of
> being selected.
The notion of an "absolute ruler" disturbs me. We
should be teachers and masters, not rulers. Help
others, advise them, allow them to have more fun.
> 14: The AI grants wishes, but only to those who
> believe in its existence,
> and never in a way which would provide blatant
> evidence to skeptical
As I said before, a wish is like a test. One should
never test the AI.
> 15: All humans are simultaneously granted root
> privileges on the system.
> The Core Wars begin.
There's always a bigger core... where's IT going?
> 16: The AI explodes, dealing 2d10 damage to anyone
> in a 30-meter radius.
I don't believe in spontaneous combustion, don't
> 17: The AI builds nanotechnology, uses the
> nanotechnology to build
> femtotechnology, and announces that it will take
> seven minutes for the
> femtobots to permeate the Earth. Seven minutes
> later, as best as anyone
> can determine, absolutely nothing happens.
Yup, nanotech can happen, will happen... but it will be
done properly, in the hands of the right people, i.e. we
will have humane Santa Claus machines. If it's done
right, nothing can go wrong.
> 18: The AI carefully and diligently implements any
> request (obeying the
> spirit as well as the letter) approved by a majority
> vote of the United
> Nations General Assembly.
Just remember that the UN might just agree to
disagree, but it's a cool idea to have a general
> 19: The AI decides that Earth's history would have
> been kinder and gentler
> if intelligence had first evolved from bonobos,
> rather than
> australopithecines. The AI corrects this error in
> the causal chain leading
> up to its creation by re-extrapolating itself as a
> bonobone morality
> instead of a humane morality. Bonobone morality
> requires that all social
> decisionmaking take place through group sex.
> 20: The AI at first appears to function as intended,
> but goes
> incommunicado after a period of one hour. Wishes
> granted during the first
> hour remain in effect, but no new ones can be made.
I expect it is up to the AI how it wishes to live. A
superbrilliant being should be allowed at least a
modicum of freedom, don't you think?
> 21: The AI, having absorbed the humane emotion of
> romance, falls
> desperately, passionately, madly in love. With
Yes *everyone*, and lusts after the zombies too ;0)
> 22: The AI, unknown to the programmers, had qualia
> during its entire
> childhood, and what the programmers thought of as
> simple negative feedback
> corresponded to the qualia of unbearable,
> unmeliorated suffering. All
> agents simulated by the AI in its imagination
> existed as real people
> (albeit simple ones) with their own qualia, and died
> when the AI stopped
> imagining them. The number of agents fleetingly
> imagined by the AI in its
> search for social understanding exceeds by a factor
> of a thousand the
> total number of humans who have ever lived. Aside
> from that, everything
> worked fine.
Have you ever read a David Eddings book by any chance?
> 23: The AI is reluctant to grant wishes and must be
> cajoled, persuaded,
> flattered, and nagged into doing so.
> 24: The AI determines people's wishes by asking them
> disguised allegorical
> questions. For example, the AI tells you that a
> certain tribe of !Kung is
> suffering from a number of diseases and medical
> conditions, but they
> would, if informed of the AI's capabilities, suffer
> from an extreme fear
> that appearing on the AI's video cameras would
> result in their souls being
> stolen. The tribe has not currently heard of any
> such thing as video
> cameras, so their "fear" is extrapolated by the AI;
> and the tribe members
> would, with almost absolute certainty, eventually
> come to understand that
> video cameras are not harmful, especially since the
> human eye is itself
> essentially a camera. But it is also almost certain
> that, if flatly
> informed of the video cameras, the !Kung would
> suffer from extreme fear
> and prefer death to their presence. Meanwhile the AI
> is almost powerless
> to help them, since no bots at all can be sent into
> the area until the
> moral issue of photography is resolved. The AI wants
> your advice: is the
> humane action rendering medical assistance, despite
> the !Kung's
> (subjunctive) fear of photography? If you say "Yes"
> you are quietly,
> seamlessly, invisibly uploaded.
Yes and time travel exists, alla crumba.
> 25: The AI informs you - yes, *you* - that you are
=== message truncated ===
I can't be bothered to go on.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT