Re: Posting to this list (was: Why friendly AI (FAI) won't work)

From: Harry Chesley (chesley@acm.org)
Date: Thu Nov 29 2007 - 09:43:39 MST


Let me back off a level. I really don't mean to be arrogant. Indeed, I
have a strong sense that I know next to nothing about intelligence,
AI, singularities, or how to implement any of the above. It's just that
there seems to be so much arrogance on this list that it's hard not to
let something similar creep into my replies. Of course, arrogance
doesn't mean that the person sporting it isn't right -- in my
experience, there's little correlation in either direction. I apologize.

I am working on AI and came to this list looking for discussion and
feedback on some issues like the morality of experimenting on AIs and
the need to incorporate FAI principles. These issues seemed well within
the stated purpose of the list, so I didn't believe I needed to stop and
study everything that a particular group has done before posting. But
it's not my list, so I'm quite happy to look elsewhere for an
appropriate discussion forum if these sorts of questions are not welcome
here.

Just as a reminder, the stated purpose of the list is: "The SL4 mailing
list is a refuge for discussion of advanced topics in transhumanism and
the Singularity, including but not limited to topics such as Friendly
AI, strategies for handling the emergence of ultra-powerful
technologies, handling existential risks (planetary risks), strategies
to accelerate the Singularity or protect its integrity, avoiding the
military use of nanotechnology and grey goo accidents, methods of human
intelligence enhancement, self-improving Artificial Intelligence,
contemporary AI projects that are explicitly trying for genuine
Artificial Intelligence or even a Singularity, rapid Singularities
versus slow Singularities, Singularitarian activism, and more."

As to the specific topic at hand, I've read about FAI to various depths
for some time, though not enough to be anywhere close to an
expert. The arguments presented have not convinced me that it's a viable
option. But I could easily be wrong, so I posted my reasons, looking
for convincing counter-arguments, which I haven't seen yet. So I'm
continuing on with my original belief set.

I'm surprised that, if you really believe FAI is essential to the
future of the human race, you don't try to evangelize it and patiently
explain it to newbies. You'll get a lot more converts that way than by
arrogantly telling anyone who disagrees with you that they don't know
what they're talking about and obviously haven't read the literature,
or they would agree with you.

But I wouldn't worry about me creating a non-friendly AI. There are many
other groups better funded and with smarter people. Right now, I'd worry
about Google. (I know, I'm not the first to suggest that.)

On 11/29/2007 4:24 AM, M T wrote:
> I'm sure I'll regret interfering as usual in this list, but...
>
> Some humbleness is necessary while on this list, Harry.
> Or while on any highly focused expert list, for that matter.
> Until you can consider yourself an expert in the field, that is.
> Then you can attempt arrogance, but right now you're a moment too soon.
>
> Whatever you are working on regarding AI (or is it AGI?), don't think you can transfer your expertise to FAGI straight away.
> The people at SIAI are pretty much the only group that has been focusing on FAGI for any considerable amount of time.
>
> What makes you think you can do it better right off the bat?
>
> ----- Original Message ----
> From: Harry Chesley <chesley@acm.org>
> To: sl4@sl4.org
> Sent: Thursday, 29 November, 2007 4:26:25 AM
> Subject: Re: Why friendly AI (FAI) won't work
>
> Thomas McCabe wrote:
>
>> What do you think SIAI is for? To develop an AGI which *is*
>> well-defined and well-built, before some random research lab or
>> garage kills us all.
>>
>
> Good luck with that. But, frankly, judging from the posts on this list,
> you sound like exactly the sort of arrogant people who believe they
> know the answer and no one else does -- the sort I especially think
> should *not* control the singularity. Hmmm, maybe I should get back to
> work.
>
>> For the love of <whatever deity you do or do not believe in>, stop
>> working until you get a clear idea of what you've gotten yourself
>> into.
>>
>
> No thanks.
