Re: Volitional Morality and Action Judgement

From: Michael Wilson
Date: Wed Jun 02 2004 - 08:33:37 MDT

Mark Waser wrote:
> I'm seeing a lot of debate between Eliezer and Ben where it's
> devolved to the point where Eliezer is no longer willing to
> fully engage with valid points.

Ben is stubbornly holding to some fundamentally flawed ideas. There's
no point repeating debates that have already occurred several times if
neither party is going to budge. Don't get fixated on Ben. You can
watch Eliezer having amusing debates with other people too.

> I've noticed a lot of people asking for clarification where they
> are going wrong and not receiving that clarification.

The world slides inexorably closer to destruction every day. Seed AI
programmers and funding are desperately needed. The really good
people all seem to grasp the basics fine and proceed to argue about
technical details off list. This takes time, but it's worth responding
because they're more likely to be useful. Meanwhile, few of the
people wanting answers on SL4 seem to have large pocketbooks waiting
to be opened. Given this situation, arguing about stuff on SL4 is fun,
but having the SIAI people spend a lot of time here isn't conducive
to saving the world. Thus please forgive Eliezer, Tyler etc for not
answering every question.

> I also think that there are a number of missed opportunities
> both in the theory and in the attempt at spreading the meme.

Spreading the meme is a risky thing to do, but talk to Michael
Anissimov if you have a cool idea about it.

> Change in roles? I'm not sure what you mean. I don't want/expect
> Eliezer to change roles. I would like to see him work more
> collaboratively.

That means supplying Eliezer with someone to work with who would
actually make him more productive rather than less. The SIAI is
certainly looking, but I don't know of any candidates yet.

>>> Relying upon a single point of failure (meaning both a single FAI
>>> and a single you) is incredibly foolish.
>> This is the best strategy we have for now.

I didn't say it's my preferred strategy; it's the only strategy we
have other than funding some utterly random people to work on it.

> 1. Why do you believe that a single FAI is the best strategy?

It isn't, I just haven't seen anyone else qualified to do serious
work on it. There are a couple of people who might have a good
chance if they devoted their lives to it, but they're doing other
critical stuff. And if it's damn hard to find seed AI programmers,
it's more or less impossible to find FAI researchers.

> 2. Why do you believe that relying on Eliezer and only Eliezer
> is a good strategy.

Ditto. Lack of competition.

> I signed up on the seedaiwannabes list when I first joined SL4
> quite some time ago...

Someone will presumably do an assessment of you sometime soonish.

> "I'm much too busy to discuss it or even write it up for anybody"

The SIAI is composed of sane people who understand risk management,
and sensible precautions have been taken as far as possible.
Furthermore, as I'm sure you can imagine, considerable discussion goes
on apart from the SL4 mailing list (which I hesitate to refer to as
the peanut gallery :) ).

 * Michael Wilson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT