Re: OpenCog Concerns

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Feb 29 2008 - 19:46:17 MST


--- Ben Goertzel <ben@goertzel.org> wrote:

> On Fri, Feb 29, 2008 at 11:24 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> > --- Gordon Worley <redbird@mac.com> wrote:
> >
> > > I just learned about OpenCog, and I'm concerned about the safety of it.
> > >
> > > http://www.opencog.org/
> >
> > Ben Goertzel is at least aware of the problem (unlike most AI
> > researchers). However, OpenCog lacks a plan to acquire the resources
> > (both computer and human) to grow very big.
>
> The plan to acquire human resources for OpenCog is simple: think Linux...
>
> 1) An army of volunteers with various sorts of expertise
>
> 2) Once the utility of the OpenCog system for various practical purposes is
> demonstrated, large companies may devote resources to it, just as IBM and
> many others have done for Linux
>
> Regarding compute resources, one approach that has some potential is
> OpenCog@Home ... massive P2P distribution can take care of some but not
> all essential cognitive operations. However, other than that, I do have
> faith that once sufficiently impressive AI capability is shown, resources
> can be found to rent computer time from available compute clouds.
>
> Also, subject to licensing terms,
> OpenCog could be used directly by commercial or government entities,
> which may have their own funding for hardware.

I mean: how do you get the vast majority of people, who aren't programmers and
aren't interested in advancing AI, to contribute? Every time I search Google
or post to a blog or mailing list, I am producing information that could be
useful to an AI. How do you capture this? How do you get ordinary users to
contribute information and rate the quality of existing information? What
does your software do for them that would make them want to use it?

> I really think hardware is not the problem.

I think if you have a few billion users, it is.

> > Even if it is successful, it requires centralized
> > control over resources to ensure that agents cooperate. The human owner
> > is responsible for acquiring these resources, which makes it expensive.
>
> I don't understand what you mean by the above. If you mean that there is
> some centralized control of cognition (as in the human brain) that is
> certainly true.

I mean centralized control over access rights. From the overview at
http://www.agiri.org/OpenCog_AGI-08.pdf (is there a more detailed document?) I
don't see a mechanism for dealing with rogue or malicious agents, or with
spammers. Perhaps I misunderstand the architecture? What is its maximum size?

-- Matt Mahoney, matmahoney@yahoo.com



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT