From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Oct 23 2009 - 12:13:30 MDT
> Why do you make up stories about why the problem is so hard?
Because friendliness *is* hard. We have a $1 quadrillion incentive to build AI, so we are going to build it whether we solve the friendliness problem or not. By default, AI is going to be designed to serve those who built it.
-- Matt Mahoney, matmahoney@yahoo.com
----- Original Message ----
From: Tim Freeman <tim@fungible.com>
To: sl4@sl4.org
Sent: Thu, October 22, 2009 10:24:16 PM
Subject: Re: How big is an FAI solution? (was Re: [sl4] to-do list for strong, nice AI)
From: tim@fungible.com (Tim Freeman)
>The PDF file at http://www.fungible.com/respect/talk-mar-2009.pdf is
><1MB. A real definition would include training data that would
>probably be a few GB's. Start reading at
>http://www.fungible.com/respect/index.html.
From: Matt Mahoney <matmahoney@yahoo.com>
>Training an AI by watching people will only define "human" as far as
>the cultural beliefs of the people it observes.
That might be true for some AI, but it's not a criticism of the
algorithm I'm pointing to. Training that AI involves more than "watching
people". It's given video that's annotated to say who is present and
what they are doing and what they are perceiving, so it's the beliefs
of the people annotating the video that matter, not the cultural
beliefs of the people appearing in the video. (The prior probability
distribution also matters, of course, but that's fixed.)
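To make the annotation format concrete, here's a minimal sketch (in
Python; the field names are my own illustration, not the exact schema
from the PDF):

    # One annotated frame of training video. The labels come from human
    # annotators, so it's the annotators' judgments about who is present,
    # what they're doing, and what they're perceiving that do the work.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PersonAnnotation:
        person_id: str        # who is present
        action: str           # what they are doing, e.g. "pouring tea"
        percepts: List[str]   # what they perceive, e.g. ["teapot", "cup"]

    @dataclass
    class AnnotatedFrame:
        pixels: bytes                  # the raw video frame
        people: List[PersonAnnotation] = field(default_factory=list)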
>It will fail utterly in the case of future human-software hybrids that
>don't yet exist.
Well, it will say *something*. Whether that's failing utterly depends
on what you want and what it does, and I don't know either of those.
This is all complicated by the fact that some of these future
human-software hybrids will surely be insane by our standards, so
perhaps we don't want the AI caring much what they want.
A human-software hybrid is likely to be as powerful as an AI and not
Friendly-by-construction. Humans are sufficiently mentally broken
that I wouldn't expect it to be Friendly-by-good-luck either. If the
Friendly AI problem is worth solving, then maybe we don't want any
human-software hybrids until we have an FAI that seems competent and
claims it's safe to do.
>Humans can recognize faces in videos without much effort. That doesn't
>make it easy. In fact the training data that allows you to do this
>consists of, among other things, several years worth of high
>resolution video.
I agree about the time period that people actually observe. I
wouldn't call human vision high-resolution, but that's minor.
Although people get that much video, I don't agree that they need it
to do facial recognition. There are programs that do facial
recognition, and they certainly don't get anywhere near that much
training data.
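For instance, here's roughly what such a program looks like (a sketch
using OpenCV's LBPH recognizer; the file paths and label scheme are
made up for illustration, and it assumes opencv-contrib-python is
installed):

    # Train a toy face recognizer on 200 small grayscale face crops --
    # a few megabytes of data, not years of video.
    import cv2
    import numpy as np

    images = [cv2.imread("faces/%d.png" % i, cv2.IMREAD_GRAYSCALE)
              for i in range(200)]
    labels = np.array([i % 10 for i in range(200)])  # 10 people, 20 shots each

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(images, labels)

    label, confidence = recognizer.predict(images[0])
    print(label, confidence)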
>Landauer measured the complexity of human long term episodic memory to
>be on the order of 10^9 bits.
I don't dispute that estimate. I do dispute the claim that it's
relevant. The only way it can be relevant is if understanding who is
present and what they are doing and what they are perceiving is a
significant portion of what humans do with all that long term memory.
For example, the learning that has to happen from the training data
includes nothing about planning or motivation or domain knowledge
outside of this specific vision problem. It includes nothing I
learned in school and nothing I learned after age 8 or so.
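For scale, here's the arithmetic with the figures already in this
thread (my numbers, nothing more):

    # Landauer's long-term-memory estimate vs. "a few GB's" of training data
    landauer_bits = 1e9           # Landauer: ~10^9 bits of episodic memory
    training_bits = 3e9 * 8       # ~3 GB of annotated video, in bits
    print(training_bits / landauer_bits)  # ~24x -- the training set is bigger

Even granting the yardstick, a few GB of training data is already more
than an order of magnitude larger than 10^9 bits.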
Why do you make up stories about why the problem is so hard? Do you
perhaps want people to give up? Do you think you gain status by
spouting marginally relevant large numbers? Something else?
-- Tim Freeman http://www.fungible.com tim@fungible.com