the ontological problem of Friendly AI (was Re: [sl4] Victims of SL4)

From: Mitchell Porter
Date: Thu Oct 09 2008 - 21:44:11 MDT

This whole business of breaking the fourth wall is amusing - for another (badly scanned) example, see

- but it's also a reminder of the chaos of views about mind and reality which exist in our circles. For example, I would not be surprised to discover that there are SL4 readers who sincerely believe, on the basis of some personal reasoning about information and existence, that Roger and his torturer genuinely existed and had experiences while they were being read about, and existed only *because* they were being read about. After all, it is not that far from the thesis that a simulated consciousness is a consciousness, to the idea that an imagined consciousness actually exists.

The situation is exactly like that with respect to religion. The world's religions cannot all be right, as they are mutually inconsistent, and most scientifically minded people suppose that none of them are right. It is the same with respect to these diverse, and often half-thought-through, personal philosophies of mind and reality. They cannot all be right, and it may be that *none* of them are right. Just think: you, yes you, may be dwelling in ignominious delusion, with beliefs that are *just as wrong* as the beliefs of the people *you* think are dwelling in benighted philosophical cults, their backs turned on reality. And what's worse, it might not just be you; it may be your whole intellectual milieu - your subculture of choice, or even your historical epoch.

A few further points.

First: If a person proposes to create a mind smarter than human, they really ought to be able to gaze into this chaos, critique it thoroughly and expertly, and say what the right ideas about mind and reality are, and explain why they are the right ones. If they can't do that, then they may or may not succeed in their attempt to create superintelligence, but clearly even if they do succeed, it will have been without really knowing what they were doing. If you can't even get the qualitative nature of mind correct, how can you possibly know what you're doing in the quantitative, technical sphere? You may be right on questions that are strictly internal to some technical domain - just as a person could be a master number theorist while being quite clueless or just plain wrong about some nonmathematical topic - but if you can't take a stand in the philosophical chaos over mind and reality, and explain why that stand is correct, you're clearly missing something.

I realize that there are people who get off on the mysterious nature of reality - it makes life an intellectual adventure, and all that - but I would think that such an attitude is inconsistent with the philosophy of Friendly AI, in particular. FAI is about being responsible, in fact hyper-responsible (compared to the way that life is ordinarily lived), regarding the act of creating AI. When one possible outcome is the destruction of the world, you don't just give it a spin and hope for the best. You do it because you know what you are doing and actually have reason to think you're not going to destroy the world, or you don't do it at all. *The same principle applies to the "philosophical" aspects of AI.* If you are even *possibly* getting the philosophy wrong, you are running unknown risks.

Second: There can be no presumption that common sense is correct, just because it's common sense. Personally I am prepared to reject most of the brave new ideas about the nature of reality which pop up regularly in transhumanist circles, and even most of the ideas about the nature of the mind which orthodox, scientifically minded philosophers wish to advance... but how far do you go in allowing your "intuitions" to justify rejection of the dogmas of the moment? People reject relativity and evolution on the same basis. So ultimately one has to face up to that tiresome philosophical issue of radical skepticism. In principle, you must be able to justify *everything*, or at least justify why you don't need to justify this or that particular belief.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT