RE: Funding for SIAI

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 21:12:36 MDT


yo,

> > a) code a prototype DGI AGI system yourself, to show off
>
> I guess it's just *inherently implausible* that AGI is too large for one
> person to build a prototype that does anything convincingly powerful...

Hell no, my own AGI designs are complicated enough that it would take me
*many* years to build a convincing prototype on my own.... Thus I never
took this route.

> Uh... no offense or anything, but I tend to view this entire line of
> argument as "Ben tries to frame the problem so that all the funding
> automatically goes to Novamente."

Dude, there is a LOT of money in the world, and Novamente cannot absorb a
very large percentage of it.

You and I are very far from being in a zero-sum game as regards research
funding.

I actually WANT you, and Peter, and others building AGI, to get as much
funding as possible. Sure, it's *conceivable* we could someday be in the
position of competing for the same funding source, but that's not the case
right now, and it doesn't seem likely to happen in the near future.

As you know, I've tried to interest some funding sources in my circle in
SIAI, but without success.

> I'll go on creating the DGI design
> because that's on the direct critical path to AI; this is also one of many
> things that increases the probability of obtaining funding.

Writing up your design in a clear and concrete way, so that a
philanthropist's "comp sci expert" buddy can understand it, will definitely
increase the probability of your getting funding.

The same can be said for me: a clearer and more concrete writeup of my AGI
design will enhance my chances of getting pure research funding as opposed
to commercial application funding. And as you saw from the Novamente
manuscript, I am definitely not there yet; my own exposition of my design
needs a lot of work...

> DGI is less than a design paper, but it is more than the philosophy paper
> you are trying to paint it as. I'm not quite sure you understood the
> non-philosophy parts, though. You may have thought I was talking about
> philosophical-ish emergent behaviors when I was talking about design
> properties with design consequences.

In calling your paper philosophical, I was using the term "philosophy"
loosely, in the same way that I'd call my book "Chaotic Logic" a
philosophical work even though it contains significant scientific,
mathematical, and software design sections (some of them much more explicit
regarding software design than anything in the DGI paper).

I think I may have a higher opinion of philosophy as it relates to AI than
you do. My calling something "philosophy" is not intended as derogatory in
any way...

I understand that parts of the DGI paper were talking about design
properties with potential design consequences -- but the design consequences
were not drawn out very clearly or explicitly. The paper is already very
long, and of course I don't think that concrete design consequences need to
be in the paper itself. But they definitely need to be spelled out more
clearly somewhere if non-Eli individuals are to understand them!

Ben G


