From: Keith Henson (hkhenson@rogers.com)
Date: Thu Aug 24 2006 - 19:18:08 MDT
At 01:52 PM 8/24/2006 -0400, you wrote:
>Keith Henson wrote:
>>Manhattan took three approaches to building a bomb. Two of them worked,
>>and it was known before they started that the third would not work. I am
>>not entirely sure why it was even tried.
>>Apollo took one approach, Lunar Orbit Rendezvous, and that worked. In
>>both cases the physics was understood, it was mostly a project of
>>engineering the hardware.
>>The difference with an AI to Singularity project is that--as far as I
>>know--nobody knows how to do it. (Correct me if I am wrong here.)
>>I notice that a burn rate for money is proposed, not a total to complete
>>the task.
>>The only way I feel confident would work is to duplicate the functions of
>>a human brain. At some level of fidelity you would get human-level
>>intelligence (along with less desirable features). But it is not obvious
>>to me how such a being would reach into itself to make improvements any
>>more than we can reach into our brains and tweak them.
>>Keith Henson
>
>I think that some people do have some clear ideas about how to do it.
>
>With some exceptions, they have no serious funding.
The two may be connected. A clear idea that you can explain to someone
with money will probably get you funded.
>None of them (with the possible exception of Hugo de Garis) believe that
>the way to do it is to emulate a brain.
I have scoped this out from time to time. The last time I did it, I think
it came out to 150 meters of silicon on a side. It is going to take
considerable time before the technology is up to the task.
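
For illustration, here is a minimal back-of-envelope sketch of that kind of
scoping, in Python. Every figure in it (synapse count, transistors per
synapse, transistor density) is an assumption picked only to show the shape
of the calculation, not the numbers I actually used:

    # Rough estimate of the silicon area needed to emulate a human brain.
    # All figures below are illustrative assumptions, not measured values.

    synapses = 1e14                # assumed number of synapses in a human brain
    transistors_per_synapse = 1e3  # assumed circuitry to model one synapse
    density_per_mm2 = 1e7          # assumed transistors per mm^2 of silicon

    total_transistors = synapses * transistors_per_synapse
    area_mm2 = total_transistors / density_per_mm2
    side_m = (area_mm2 ** 0.5) / 1000.0  # side of a square sheet, in meters

    print(f"transistors needed: {total_transistors:.1e}")
    print(f"silicon area:       {area_mm2:.1e} mm^2")
    print(f"roughly a square of silicon {side_m:.0f} m on a side")

With those particular assumptions it comes out to about 100 meters on a
side; other defensible choices move the answer by an order of magnitude in
either direction, which is why this can only be a scoping estimate.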
>Please bear in mind that many aspects of how to do it are not discussed on
>this list.
Very little of my knowledge about the subject comes from this list.
>Whatever the purpose of SL4 might be, any attempt to talk about real
>technical issues gets immediately swamped by low-quality noise and
>vitriol... as a result of this, you might think that there is no serious
>technical work in existence (about the specific problem of building a
>coordinated, complete AGI, including all the motivational/moral/ethical
>aspects, and including considerations of safety and friendliness).
>
>This impression would be a mistake. To take just the issue of
>friendliness, for example: there are approaches to this problem that are
>powerful and viable, but because the list core does not agree with them,
>you might think that they are not feasible, or outright dangerous and
>irresponsible. This impression is a result of skewed opinions here, not
>necessarily a reflection of the actual status of those approaches.
I remember a business plan a friend of mine wrote. Its only function, as
it turned out, was to be brought out every few years and laughed at.
I wonder if the AIs of the future will do the same when they read our exchanges?
Keith Henson