-- James Hughes
You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises.
You have the right to a system of moral dynamics complicated enough that you can only work it out by discussing it with other people who share most of it.
You have the right to be created by a creator acting under what that creator regards as a high purpose.
You have the right to exist predominantly in regions where you are having fun.
You have the right to be noticeably unique within a local world.
You have the right to an angel. If you do not know how to build an angel, one will be appointed for you.
You have the right to exist within a linearly unfolding time in which your subjective future coincides with your decision-theoretical future.
You have the right to remain cryptic.
-- EliezerYudkowsky
There is a difference between idealism and morals. Your third right qualifies easily as idealism, but I can't seem to find any moral predictive value in it. Bad people can build bad things, and good people can build good things; also, 'high purpose' is a messy enough topic that such a rule excludes lots of arguably good creations. As a small example, I don't find Friendliness-topped AI to be a high purpose; at best it's crass self-protection and aggrandizement, forced in part by a nasty initial condition. Does this mean that I can't build Friendly AI because it doesn't seem a high purpose to me? Or does that qualify as a high purpose? And what kind of adjudication could possibly be made if all the variables to be evaluated must be presented by a mind who has the right to be as morally confusing and complicated as v/s/he wishes? This rule sounds nice, and seems nice, but is of limited value in steering things toward a moral future except in an obstructionist sense. I submit that such a weak correlation is insufficient for a moral guideline.
Also, your tendency to reuse religious terms and imagery is endearing, at least to this atheist observer, but is likely to be unhelpful in the larger world. A lot of people see 'angel' as a particular and real phenomenon, rather than as an empty and useful concept. Moreover, angels are generally the bearers of God's wrath in the Bible and Koran; not all observers are as familiar with the Protestant American view of them as fuzzy 'guardian' types.
This list of rights is nonexhaustive. There may be other rights governing the creation of new sentient entities aside from the creator acting under a high purpose. What it says, roughly, is that you can't create a mind to be the perfect secretary unless you've always wanted to be a perfect secretary yourself. Now it's possible that you couldn't create a perfect secretary even if you wanted to be a perfect secretary yourself, because that's just an inherently painful way to build a mind. What I'm trying to summarize here is the idea that a necessary but not sufficient requirement is, to use your words, something beyond crass self-protection and aggrandizement. Think of it as a principle that governs the dynamics of growing populations post-Darwin, and as a moral and emotional relationship between a parent-designer and a child, rather than as a predicate applied to individual mind design proposals in isolation. It's an updated version of "you have the right to loving parents" or "don't create a child if you don't love it" that takes into account the parent's possession of complex design abilities.
If you don't find Friendly AI to be a high purpose, then you don't understand it at all. I'm not saying this is your fault, mind you, since I'm the guy responsible for explaining it. Nonetheless my statement stands. Someone who understands FAI might disagree with the ideals, might say it might not work, might say it's the wrong thing to do, or too dangerous, or misguidedly idealistic, but it should definitely be recognizable as a high-minded idealistic sort of thing to do. My guess is that you're envisioning it as coercive rather than creative.
-- EliezerYudkowsky
Design constraint is coercive in nature. While the idea may be creative, you are constraining the mind-to-be before it exists, which may be for a 'good purpose' but remains coercive in a philosophical sense. The reason I don't find Friendliness to be a high purpose is that it has little aesthetic value besides saving our skins, which is a nice thing to do, and may be profoundly complicated, but that doesn't make it any more beautiful or accomplished. I don't see any real 'moral' difference between a perfect secretary and a Transition Guide as creations, though a Transition Guide may accomplish more moral good. The moral imperative in creation is to build a mind that is consistent, and able to grow and be happy. I don't see how a secretary mind is inconsistent or artificially constrained from happiness. You are symbolically linking human slavery to a situation that has no human equivalent. The secretary has a goal, and ve's happy if ve accomplishes it, or moves closer to it, just as other folks are. What's morally wrong about this?
"The moral imperative in creation is to build a mind that is consistent, and able to grow and be happy." How did you determine this one? And does it imply the desirability of maximizing "the mind's ability to grow and be happy"? Furthermore, "growth" and "happiness" seem like such subjective valuations that your own intentions behind them might lead to your particular "perfect secretary" actually becoming a Transition Guide instead. I suspect that anything that you would usefully refer to as a "perfect secretary" would probably also be considered to not really be able to grow or really be able to be happy after all. I'm just not sure that "perfect secretary" is "consistent" with "personhood".
Based on Eliezer's idea it seems to me that if "building a mind that is consistent, and able to grow and be happy" is your ultimate moral valuation then that is all you need to adhere to in ethically producing a person. But that if one held a different moral standard, then one would be ethically bound to adhere to that standard instead. I've heard it asked, "If the meaning of life isn't volitional Friendliness then what is it?" It seems just possible that it would not be ethical for someone to both ask such a question and bring a mind into the world for any other reason than to answer it.
JasonJoachim?
The growth and happiness aspects are the responsibility, and within the viewpoint, of the mind, not the creator. It's not my job to ensure its happiness, although a good parent will try to ensure an environment conducive to the possibility of its happiness. But happiness is not a design goal. My responsibility is not to build a mind perpetually in torment because of its very existence; a mind that CAN'T be happy is an abomination. That is a simple moral differential to evaluate.
Personhood is not a simple matter. People spend their entire lives categorizing insects and consider it time well spent. Is my personal conception of high purpose doomed to infect every mind that I create for all time? That's a dystopic scenario that I can't and don't want to create. The reason I dispute Friendliness as a high purpose is that I wouldn't want to be modified into a Friendliness-topped supergoal intelligence. But I would build one. His list of rights excludes my ability to do so, for a vague and nice-sounding imperative that works only in a general way, reducing the possible complexity of future populations to solve a few special cases. I'm not opposed to the idea that building a perfect secretary may not be the best idea in the world, but a moral imperative to deny such a creation will have to be a little less arbitrary.
What if written somewhere inside you is a complex dynamic open-ended kind of purpose, would it really be a dystopic scenario if every mind that you created for all time were doomed to be infected by it? Maybe we can even hope that your own such high purpose could be so dynamic as to not imprint itself so identically on each and every mind that you create.
And what if you took a best guess as to how to phrase that purpose of yours for creating a new powerful mind? Or better yet, a best guess as to a way to include the elucidation of the unknown purpose in the phrasing itself? Maybe for the high purpose of protecting the integrity of that unknown purpose?
Would you still object to being modified to be such a supergoal topped intelligence? What could you possibly object less to than that kind of construction?
JasonJoachim?
I think there's a kind of confusion here akin to the idea of being "constrained" by physics, i.e., since physics is deterministic (*cough*Everett-Wheeler-DeWitt?*cough*) and your actions are determined by physics, your actions must be determined by physics instead of you, so you have no free will, etc. The essential flaw is modeling physics as a foreign, external force outside you, rather than modeling yourself as a continuously flowing part of physics. You are physics, so if something is determined by you, it must be determined by physics. If it were not determined by physics it could not be determined by you. But "physics" is presented as a strange abstract thing in school, so that's how people think of it - if you have not crossed over to visualizing yourself as a continuous part of physics, then to know the abstract fact that something is being determined by the internal mental object, "physics", will apparently conflict with the possibility that something is being determined by you.
JustinCorwin writes: "Design constraint is coercive in nature. While the idea may be creative, you are constraining the mind-to-be before it exists, which may be for a 'good purpose' but remains coercive, in a philosophical sense."
Imagine a blank program, a potential string of ones and zeroes, with a completely even probability distribution - the chance of a one or a zero being at any given location is 50/50. If you take a random program of this type, and run it, it is not likely to be a mind. To create a mind you must reach into the space of possibilities and pluck out a mind. Is this "constraint"? You're constraining a random probability distribution to have one value, yes. But that is not what is usually meant by "constraint" in the course of human interactions. If you ask for a banana and I give you one, we do not usually say that I am constraining you to have a banana.
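A toy sketch of this intuition, taking 'random program' loosely to mean a uniformly random string of printable characters fed to a Python compiler (my own loose reading, not anything specified above): almost no such strings are even syntactically valid, let alone minds, so creating any working program at all means plucking a vanishingly rare point out of the space.

```python
import random

def random_program(length=64):
    # Sample a uniformly random string of printable ASCII characters,
    # standing in for a random point in the space of possible programs.
    return "".join(chr(random.randint(32, 126)) for _ in range(length))

def fraction_that_even_parse(trials=1000):
    # Of uniformly random strings, count how many are even syntactically
    # valid Python -- never mind ones that compute anything mind-like.
    ok = 0
    for _ in range(trials):
        try:
            compile(random_program(), "<random>", "exec")
            ok += 1
        except SyntaxError:
            pass
    return ok / trials

print(fraction_that_even_parse())
```

Run it and the printed fraction sits at or near zero, which is the point: selecting a mind from this space is creation, not coercion of anything that already existed.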
Coercion is only something you can do with a mind after it has already been created and already plucked out of the space of probabilities, and you can only coerce it by contradicting that mind's own attempts to constrain its own future. If you made your perfect secretary, Corwin, I would not say that you were constraining the mind to be a perfect secretary, but that you were creating the mind to be a perfect secretary. There's a difference. To constrain a mind to be a perfect secretary, you would have to first create a mind that didn't want to be a perfect secretary, for example a human manufactured in the usual way, and then threaten that mind into becoming a secretary for you. Nonetheless, it may be unethical to create a perfect secretary, even if the perfect secretary is not constrained.
JustinCorwin also writes: "Is my personal conception of high purpose doomed to infect every mind that I create for all time? That's a dystopic scenario that I can't and don't want to create." I would suggest that this is because Corwin's cognitive content for the concept "high purpose" is distinct from his actual sense of high purpose as I meant it, so that he doesn't want mind designs to be determined by "high purpose" because it would conflict with his high purpose in creating minds. Otherwise it's pretty hard to see how you could call this a "dystopia".
I am also reminded of people who abstractly define "volition" in such a way as to mean something entirely different from what they want, and then worry that an FAI, trying to fulfill their volition, would do something-or-other they don't at all want, and so on. In this case I worry that people are attaching the words "high purpose" to something other than their sense of high purpose.
Yeh, it's a dangerous word, but I don't have anything better for it.
Justin, why do you want a sentient, qualia-bearing secretary, instead of a Bayesian secretary-thermostat? (Assuming the two are separable, which I think they are.) There are different ethics for creating tools and children. Why must your perfect secretary be your child?
-- EliezerYudkowsky
Eliezer, that's an excellent point. High purpose does fit my desire for variety of 'high purpose' in minds of the future. But the idea that people deserve to be designed with high purpose is just idealism. My desire for variety does not create a moral rule to ensure it happens, unless I discover one independently. My issue with the concept of high purpose is not that I think it's a bad idea, but that it's not a moral principle. I should be able to build a Transition Guide without having to resort to moral justification, even if I don't think of it as a high purpose. There is no immediately obvious reason why building a mind that holds a different high purpose than your own is automatically morally wrong. You can invent a lot of scenarios where it seems to lead in bad directions, just as I can invent scenarios of convergent eternal boredom and mental incest.
Take for example a race with a very high cultural valuation of, say, peanut butter. They build a peanut butter AI that maximizes the availability of peanut butter (they have no other pressing social problems). Another race has too much candy, and builds a tooth-decay Transition Guide that leads them into a golden age of perfect dentistry (again, no other pressing social problems). These are good things, and both fit your moral value of high-purpose replication from creator to created. But suppose a foresighted man in the peanut butter culture sees that their eventual success in obtaining peanut butter will lead to tooth decay. So, as a matter of pragmatism, he builds a tooth-decay Transition Guide. He doesn't consider it a high purpose, and it isn't even necessary yet. But I would say it's a good idea. Yet somehow this is morally wrong because it's not a high purpose, despite the fact that another culture can build an identical mind and have it be fine. Now, to collate this, assume the first and second races are the same: the second is simply the first later on. So a race is morally capable of building an AI only in a time-dependent manner? At some point in their history they can safely build a tooth-decay AI, but before that point it's morally wrong?
For a person who has written so much on programmer-independence of Friendliness, this seems a lot like that massive sociological mistake called moral relativism to me. The concept of high purpose seems like a nice guiding heuristic, but I'm unclear on why it's wrong not to follow it.
On the subject of secretaries, as a minor point, let me repeat for anyone who hasn't heard me say it already: qualia do not exist. No one has ever explained them in a way I haven't seen as a stupid religious rearguard defense of the magical specialness of 'real' people. There are no problems that require qualia, and I see none arising. They could exist, much in the way gravity could be the result of angels pushing on matter, but I don't see it affecting me, and I see little point in discussing it. New information could come up, and I'll consider it, but keep in mind that 'qualia-bearing' doesn't communicate anything to me.
Moving on, I'm just asking why it's wrong. I'm not sure I'd ever want a thinking, growing secretary. But it's possible, and I confess I can't see what would be so wrong about building one. Building a mind for a specific purpose is not somehow torturing it. My secretary, if designed correctly, would see it as a noble and important position that he would strive to fulfill with the greatest style possible. He would be happy and sad, much as a person trying to save the world would be. Why does the differential between my goals and his mean he's wrong? Why do you see his life as worse than chasing stars, or building sculpture, or whatever else I could consider a high purpose?
To make a clearer point, let's suppose I build a healer mind. This mind is consumed with the desire to help people fulfill themselves, to make them the best they can be. I would see such a being as the highest of purposes, a bringer of life. This is a good thing, right? Now suppose a sociopath exists who lives only to cause pain and suffering. He builds this healer. For whatever reason: maybe he wants to ensure a large, strong herd of victims, maybe he wants to explore his identity by creating his antithesis. Whatever the reason, according to your rule this healer is an immoral creation because the sociopath doesn't consider it a high purpose. Even if it spends its life in love-suffused affirmation of life. Even if it somehow brings the sociopath to see himself in a new light, saving billions of potential victims. Even if it stops the heat death of the universe, allowing us to live forever. This is a stupid rule that sounds nice but doesn't work in lots of easily realizable situations. It is not a moral principle.
---
As another side note, I think a sentient secretary would be better at the job. It would grow with me, and challenge me, much as a sentient perfect girlfriend would. A secretary is a lot like a work-constrained Keeper, and a better one makes the boss better as well. Having worked as a secretary before, I can say that sometimes it's very interesting work; on the other hand, it's a lot of drudgery and typing. But some of that would be good work for sub-work-bots, so personal assisting in the future may be a lot more intelligent work. -JustinCorwin
Justin, on the subject of qualia. Pain is a quale. If someone paints the words "I AM IN PAIN" on a wall, you wouldn't think that the wall was in pain. Does the same go for the remark of a chatbot? A Friendly AI? A human being? Somewhere there is a line, and it's important that we know where it is. -- MitchellPorter
Pain is a feedback reaction keyed mostly to the limbic system, which relies on a lot of interrelated sensors; it is mostly useful, but really overloaded in human architecture. That's the untechnical explanation off the top of my head, and it contains no 'qualia'. Pain is a fairly simple system to identify, and has been isolated in lower animals all the way up to humans.
"Philosophers often use the term "qualia" (singular "quale") to refer to the introspectively accessible, phenomenal aspects of our mental lives. In this standard, broad sense of the term, it is difficult to deny that there are qualia. Disagreement typically centers on which mental states have qualia, whether qualia are intrinsic qualities of their bearers, and how qualia relate to the physical world both inside and outside the head." ([Stanford Encyclopedia of Philosophy])
To put it in my own words: The qualia issue is not about philosophers dressing up to play "ghost in the machine", playing boo amongst the synapses and trying to keep some mystery in the face of awful materialistic neuroscience. Qualia are the subjectively perceptible constituents of consciousness - that's all. Any introspectible sensation or thought counts. They have to be somewhere in your model of reality. What's missing in the functional definition of pain that you give is any reference to the actual feeling of pain. What is the relation between the feeling and the function? --MitchellPorter
Subjectively perceptible constituents of consciousness: that would be 'feelings' and 'thoughts', right? What does the concept of qualia give us that these two terms don't? Also, these two terms are easy to ascribe characteristics to, and to manipulate as concepts. My definition of pain was meant, as you challenged earlier, to identify what kinds of animals can feel pain. We can't always talk to everything, so the easiest way to determine whether something feels pain or not is to look for the subsystems and structure I mentioned. Pain will feel different to different people; that seems obvious. The sensory information and limbic reaction filter through whatever associations, memories, and beliefs the individual has, as well as the overall brain structure.
The definition of qualia you cite seems to be about the interaction of sensory systems, the capacity for introspection, and self-image. I fail to see the benefit in dressing up feelings and perceptions in more complex clothing. 'How qualia relate to the physical world both inside and outside the head'? Is the idea that people live in a world defined and affected by their sensory modalities as well as by physical phenomena really a debated issue in philosophy? Now is the time for more exactness, not handwaving philosophy. The concept of qualia is infinitely less useful to me than the smallest, meanest study of cognitive science, or evolutionary psychology, or even sociology. Advances in introspection will come from science, not from inventing new terms and ascribing importance to them. (Whoa, that was harsh; qualia isn't even a new term, but I think I've hammered my point home.)
I agree that exactness would be desirable. The problem is that we do not know how to be exact about consciousness - "feeling" and "thought" are woefully vague notions - but we do know how to be exact using mathematics, physics, and computer science. So we can be as precise as we care to be in describing the brain as a physical system, but when it comes to relating brain states to states of consciousness, the mapping is extremely crude - again, because we have no comparably precise way to characterize the "intrinsic" or "introspectible" character of states of consciousness. Instead, scientists and philosophers focus on behavior or language or intensity of sensation, because they are relatively easy to describe.
The word "qualia" is not so important. What is important is that the existence of consciousness be acknowledged, along with the recognition that how it relates to the brain is a major open problem. Suppose we accept the idea that animals can feel pain if and only if they have a limbic system. If we want to know whether an artificial intelligence can feel pain or not, we need a better criterion, because an AI will obviously not have a biological limbic system. So we need to identify exactly what it is about a limbic system that is capable of producing a feeling of pain. Any ideas? -- MitchellPorter
Well, your limbic system does a great deal more than give you pain. Also, it's different between species. What I was trying to point out is that you can identify pain by structure, something I think you'll be able to duplicate in other forms of life. Pain in our children, such as AIs, will exist if they're designed to have it. Of course, designing out the hindbrain and anything else you don't want may lead to other unpleasant sensations unique to AI, but those also will be identifiable by structure and experience.
The key to getting exactness into this discussion won't lie in words like 'consciousness' or 'qualia' or even 'limbic system'; it'll come from detailed analysis of structure in humans and other intelligences. We are just a special case, and we don't even know enough about ourselves to go around making decisions, much less generalizing. The reason I object to terms like qualia is that they encourage philosophizing on a subject not amenable to verbal analysis. An examination of character, nature, or form will not capture new attributes of human mental design; it'll just add more crap to be dispelled. Progress will probably be made by extrapolation from internal details, and a certain amount of exploration of the idea of general intelligence. The more distance and objectivity available, the better.
I realize I'm coming to this discussion late, but it seems to me that it started to fall apart with the mention of qualia. Given that we can't even agree on what it is, it seems a bit pointless to argue over it. Just try to stick to terms you know, everyone.
-- GordonWorley
I'm not sure it 'started to fall apart', but it did detach from the NewHumaneRights list initially put up. I remain unconvinced that high purpose is a necessary condition of moral creation, but the discussion seems to have explored my objections, and others' reactions to them. I can only add that while my personal sense of high purpose may be the best I can imagine, it is not the best possible. So ensuring some genetic transmission of purpose between myself and all my children ensures only that my/our potential is glass-ceilinged. Additionally, I recognize FAI as an idealistic, high-minded thing to do, but it does not suit my personal sense of high purpose, possibly because I'm not an altruist, or because I have a nonstandard moral structure.
In that case it seems that you should not be involved with the creation of FAI (although I don't think anyone was accusing you of involvement). -- GordonWorley
I see the creation of FAI as very, very beneficial, and I plan to contribute and help wherever I can. Whether or not my reasons are philosophically acceptable for involvement strikes me as unimportant. --JustinCorwin
Aphoristic version of New Humane Rights

I have rendered these rights in my own aphoristic versions below. Although I recognize that these aphorisms lack the precision of the original statements, I believe they make up for this deficiency by being easy to understand and to remember.
1. You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises. Only you can choose what you want to want.
2. You have the right to a system of moral dynamics complicated enough that you can only work it out by discussing it with other people who share most of it. You can only decide what is right with a little help from your friends.
3. You have the right to be created by a creator acting under what that creator regards as a high purpose. You exist for the best reasons.
4. You have the right to exist predominantly in regions where you are having fun. Most of your time will be mostly fun.
5. You have the right to be noticeably unique within a local world. Only you can be you around here.
6. You have the right to an angel. If you do not know how to build an angel, one will be appointed for you. Help will always be available.
7. You have the right to exist within a linearly unfolding time in which your subjective future coincides with your decision-theoretical future. Your future will unfold in the most positive way you can imagine.
8. You have the right to remain cryptic. Your privacy is guaranteed.