From: Christian L. (n95lundc@hotmail.com)
Date: Tue Mar 27 2001 - 16:13:04 MST
Dale Johnstone wrote:
>
>Christian L. wrote:
> >When I first started subscribing to this mailing list, I thought that the
> >goal of SingInst was to build a transhuman AI. I was wrong. The goal is
> >obviously to build a Utopia where Evil as defined by the members of the
> >list will be banished. The AI would be a means to that end, a Santa-machine
> >who uses his intelligence to serve mankind.
>
>I can't speak for SingInst but as someone who's working towards the same
>goal, I'm in it because I see a possible way to do away with death and
>misery once and for all. Although I'm obsessed with AI, I'd switch to
>collecting detergent coupons if that would do any good. Unfortunately it
>doesn't. Building a transhuman AI, though, does.
>
It might, yes. I agree.
>List members do *not* get to define what is evil and what is banished.
Oops. This has already been done:
"To eliminate all INVOLUNTARY pain, death, coercion, and stupidity from
the Universe."
>Basically, the idea is to build a really smart mind, and help it to
>*understand* us and our common desire for a better world in the same way
>that we do - not by some rigid laws cast in stone, but by thoroughly
>understanding the subtleties and details. If we've done our job correctly,
>it will eventually understand this even better than we do. Think of it as
>raising a child if you prefer.
>
Even if it understands us and our desires, I don't see why it would
automatically become our servant. We might understand the desires of an
ant colony, but if we need the space, we remove the anthill.
>Okay, next step is to have it create an even better version of itself -
>actually it will want to do this all by itself because it's such a good
>idea. It's up to it to choose how to do this, but because it's Friendly and
>smarter than we are, it'll make its successor (or a modified version of
>itself) Friendly too. The Mark 2 version will also do the same, only better.
>This is what I refer to as the Friendliness attractor. Each successive
>generation is better able to understand us and better able to help us.
>
>Okay, now for the SysOp idea. This is what Eliezer and many on the list
>think the AI will come up with in order to be as Friendly and as fair to us
>as possible. We can't say for sure, but it's the current favourite. We
>don't get to decide this; the AI does. If the AI eventually thinks of
>something better then that's what'll happen.
>
My point exactly. I can think of a lot of better things for it to do than
serve us. Again, I will try to avoid debating Friendliness until I have
read FAI.
> >If you organize yourself in a "pack" and follow the rules set up there,
> >you can get personal protection and greater means of achieving your
> >goals (they normally coincide with those of the pack). When you interact
> >with another pack-member, you can be pretty sure that he/she will not
> >break the rules and risk exclusion from the pack. This can be called
> >trust. The rules that the pack sets up can be called ethics.
>
>This is all well and good in the jungle, when about the worst I could do
>was hit you with a stick. However, pretty soon nanotechnology is going to
>become readily available, and it's practically impossible to defend
>yourself against it. If any single individual has the ability to turn the
>crust into bubbling slag - you can bet your life some crazy nut will do
>it, either deliberately or by accident.
>
Probably, yes. That's why it is important to get on with the AI programming.
>The world by and large hasn't woken up to the facts yet. It's clear that
>things aren't going to get any better by themselves. I hope you can now
>understand the urgency in our desire to apply a little transhuman
>intelligence to the problem.
>
I assure you, I did understand it before. I just don't see the point in
idle speculation about the actions of a future SI. It will do as it
pleases. If we manage to program it into Friendliness, it will be
Friendly. Maybe it will ignore humans. Maybe it will kill us. I don't know.
My interests lie in getting to the Singularity. After that, the SI is
calling the shots. I don't think you can plan ahead beyond the
Singularity, and I certainly am not going to. You can do your best in
trying to program a Friendly AI, but in the end, the AI will be in charge.
Only time will tell what happens after that.
/Christian