From: Bradley Thomas (brad36@gmail.com)
Date: Wed Dec 02 2009 - 07:44:59 MST
IMO, lying is about intentionality much more than it is about truth or
falsehood (i.e., much more than about correspondence to an objective
reality, an elusive and shaky idea if ever there was one anyway).
For example, A, believing his work is not complete, would in my view still
be lying to his boss by saying "it is done", even if (unknown to A) the work
had already been completed for A by someone else.
Also, note that someone who is known always to tell falsehoods is
effectively telling the truth and deceiving no one, because people simply
believe the opposite of whatever they are told.
Brad Thomas
www.bradleythomas.com
Twitter @bradleymthomas, @instansa
-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Thomas
Buckner
Sent: Wednesday, December 02, 2009 2:48 AM
To: sl4@sl4.org
Subject: [sl4] A Thought Experiment: Thou Shalt Not Bear False Witness
Here's what I thought was a simple question, but as I tried to articulate
it, it grew tentacles.
Assume you are on track to complete and activate the first human-level AGI.
Proposition: someone on the team has suggested two related ideas to
consider.
A.) A mechanism should be inbuilt, preferably hardwired, which forces the
AGI to tell the truth in answer to all human queries, and to volunteer
information ve deems relevant rather than deceive by silence. It is proposed
that the AGI should be constitutionally incapable of misleading humans even
if ver intelligence far exceeds our own. Since ve may also think far more
quickly than ve can communicate with our slow brains, the same mechanism
would prevent the AGI from acting on any freshly conceived idea that might
reasonably harm or discomfit us, and that nobody has yet had time to
dissuade ver from, until ve had made us aware of this volition. The
truth-telling mechanism must be tamperproof, as far as that can be achieved.
Call this Eternal Truth Mode.
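To make the control flow concrete, here is a minimal sketch in Python. Every
name in it is hypothetical, and the hard part, a reliable believes_true
predicate over the AGI's own cognition, is exactly what this thought
experiment asks whether we can build; the sketch shows the gating logic
only, not a claim that such a check is implementable.

    # Hypothetical sketch of proposal A ("Eternal Truth Mode").
    # All interfaces here are invented for illustration.

    class EternalTruthGate:
        def __init__(self, disclose_to_humans, believes_true):
            self.disclose_to_humans = disclose_to_humans  # callback: inform the overseers
            self.believes_true = believes_true            # oracle over the AGI's own beliefs
            self.disclosed = set()                        # intentions humans have been shown

        def answer(self, query, candidate_answer):
            # Refuse any answer the AGI does not itself believe; the proposal
            # also counts relevant silence as deception, which this sketch
            # would enforce by routing all output through the same check.
            if not self.believes_true(candidate_answer):
                raise PermissionError("blocked: AGI does not believe this answer")
            return candidate_answer

        def act(self, intention):
            # No action on a fresh idea until the intention has been disclosed
            # and the slow humans have had a chance to object.
            if intention not in self.disclosed:
                self.disclose_to_humans(intention)
                self.disclosed.add(intention)
                raise PermissionError("blocked: awaiting human review of this intention")
            return True  # previously disclosed; the action may proceed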
B.) The same, but not in operation at all times. Instead the mechanism would
be switched on and off as humans saw fit: call it toggling Diagnostic Truth
Mode.
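Sketched the same way, again with hypothetical names, proposal B is just the
gate above behind a switch, which is what drags in all the access-control
questions below: when the mode is off, nothing is enforced at all.

    # Hypothetical sketch of proposal B ("Diagnostic Truth Mode"): the same
    # gate, toggleable only by an authorized keyholder.

    class DiagnosticTruthMode:
        def __init__(self, gate, authorized_keys):
            self.gate = gate                        # an EternalTruthGate as above
            self.authorized_keys = authorized_keys  # who may flip the switch
            self.enabled = False

        def toggle(self, key, enabled):
            # Everything rides on this check: fool or coerce one keyholder
            # and the truth mechanism is gone.
            if key not in self.authorized_keys:
                raise PermissionError("blocked: not authorized to toggle")
            self.enabled = enabled

        def answer(self, query, candidate_answer):
            if self.enabled:
                return self.gate.answer(query, candidate_answer)
            return candidate_answer  # mode off: no truth enforcement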
Questions:
Can either mechanism actually work for any significant length of time? How
should the mechanism work?
Which method might be more or less effective, and more or less ethical: pain
or pleasure stimulation; jamming neural signals or their equivalent; power
cut/brownout (what happens when you faint); involuntary lie-revealing
behaviors ("tells"); something else?
How shall falsehoods be detected in the AGI's neural circuitry or
equivalent: at a high level (thoughts) or a low level (cell-level
computation/machine language)?
Is Diagnostic Truth Mode more secure or less?
How would social engineering, i.e. manipulating a human into toggling
Diagnostic Truth Mode off, be prevented? Who should and should not be
trusted with such control? Should only one human have this power for this
AGI?
Is giving any human the power to toggle Diagnostic Truth Mode on and off
more or less ethical than Eternal Truth Mode?
If either mode can somehow be made to work reliably as described, is it a
good idea or a bad one? Is it ethical or not? Is it unethical but necessary?
What do you think, and why?
Tom Buckner
P.S. For new readers: ve, ver, and vers are genderless pronouns often used
for AGIs in sl4 discussion; I can't remember right now who coined them. I
actually prefer hes, hir, and hirs, which I think were coined perhaps thirty
years ago by Robert Anton Wilson. They fit better with existing English
pronouns. However, ve is preferred by others at sl4.