From: Eliezer S. Yudkowsky (email@example.com)
Date: Fri Jan 28 2005 - 09:02:39 MST
Russell Wallace wrote:
> On Mon, 24 Jan 2005 21:44:13 -0500, Ben Goertzel <firstname.lastname@example.org> wrote:
>>IMO, this is foolish.
>>It is obvious that a GP-based optimizer running on less than a million PC's
>>(to be conservative) is not gonna take-off, transcend, become self-aware,
>>It's obvious that Cyc, SOAR and EURISKO are not going to do so.
> I agree, these things are obvious. But still, perhaps there's
> something to be said for the idea of getting into the habit of
> thinking safety-first from day one, so as to be already in it if and
> when one later gets to the point of having something that _could_ take
> off etc.

"Obvious" simply DOES NOT COUNT on the scientific frontier. *Any*
scientific frontier, not just those on which lives depend. Nature has this
disconcerting habit of coming back and saying "So what?"

And no, this is not the Ineluctable Hazard of Reason which applies equally
to all thoughts and may therefore be ignored. Nature says "So what?"
significantly less often to some kinds of thinking than others, and if you
look closely enough at scientific history, you start to see a pattern.

Nature sometimes but very rarely says "So what?" to quantitative
calculations based on a confirmed model of a specific phenomenon. On the
other hand, when it comes to passionate verbal argument about the
entanglement of morality with some poorly understood outcome, Nature says
"So what?" so often you'd think She wasn't even listening. "Obvious" in a
case where you have no clue how to perform a quantitative calculation falls
somewhere between those two cases. If it's a yes-or-no question on a
poorly understood scientific frontier, the obvious answer is probably
correct, oh, say, no more than 2 times out of 3. If it's not a yes-or-no
question, forget it, you're screwed without a technical model.

I cannot calculate how much hardware, quantitatively, it takes to run an
intelligence which is effectively transhuman in the sense of being able to
beat up humanity and take away our lunch money. I guess that the answer to
the yes-or-no question, "More or less hardware than a modern personal
computer?" is "Less." I cannot calculate how much cumulative optimization
pressure it takes to produce something dangerous. I guess that the answer
to the yes-or-no question, "More or less cumulative optimization pressure
than modern GA-based systems with a few design improvements running for a
year on expensive hardware?" is "More." But I am not ABSOLUTELY SURE of
either answer. That is not just dutiful uncertainty to impress others with
my rationality; I anticipate that the outcome might really go either way,
and I do my best to plan for both cases.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT