From: Fabio Mascarenhas (firstname.lastname@example.org)
Date: Fri Apr 20 2001 - 22:59:03 MDT
The note below appeared today in ACM Technews, an electronic newsflash
received by most ACM (Association for Computing Machinery) members. While
it's based on the Wired article, it introduces some information not present
there. Very good, in my opinion.
About Alon Halevy's statement (is he the same "well-known researcher"
cited in the Wired article?), obviously he hasn't read "Friendly AI", nor was
he involved in the discussions a couple of months ago. A "friendly"
(lowercase f) AI isn't the same as a "Friendly" (uppercase F) one; that's a
point Eliezer tries to convey. Oh well... at least I can sympathize with
him; not everyone has the time to thoroughly read a 750K essay, especially
when it goes against his opinion as an expert in the field.
"Making HAL Your Pal"
Wired News (04/19/01); McCullagh, Declan
A small band of futurists is predicting that computers will one day design
themselves rather than be designed by humans. Eliezer Yudkowsky and his
compatriots at the Singularity Institute argue that humanity needs to design
frameworks to ensure our safety once technological innovation culminates in
a computing epiphany--a point they call "Singularity." In his treatise
"Friendly AI," released this week, Yudkowsky gives advice he has mulled over
for the past 10 years, including policy and design principles that should
help set the stage for friendly artificial intelligence. However, academics
are not impressed with the Singularity Institute's predictions. Alon Halevy
of the University of Washington's computer science department says creating
AI is such a large task that once it is completed, factoring in benevolence
will be a relatively easy matter. He says, "The challenges we face are so
enormous to even get to the point where we can call a system reasonably
intelligent, that whether they are friendly or not will be an issue that
is relatively easy to solve." Still, Yudkowsky says AI is on the way and
guidelines need to be set now. He says, "The Singularity Institute is not
just in the business of predicting it, but creating it and reacting to it.
If AI doesn't come for another 50 years, then one way of looking at it would
be that we have 50 years to plan in advance."
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT