Re: [sl4] Re: More silly but friendly ideas

From: Stathis Papaioannou
Date: Sat Jun 28 2008 - 05:46:36 MDT

2008/6/28 Stuart Armstrong:

> I don't actually. I thought I did initially, but then when I analysed
> it, the whole thing fell apart. Goals seem to be the opposite of
> axioms; they are the end point, not the beginning, of the process. An
> AI with a goal X will build a sequence of logical steps that ends
> up with X, then compare this with other sequences with similar
> consequences; this is the reverse of construction from an axiom.

An axiom is a proposition that is assumed to be true, not dependent on
the truth of other propositions. Similarly, if an intelligent agent
has a top goal, it is simply taken by the agent to be valid without
further justification.
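[The contrast the thread is drawing can be sketched in code. This is an illustrative toy added here, not anything from the original posts: a tiny rule base where forward chaining starts from axioms and derives whatever follows, while backward chaining starts from a goal X and searches in reverse for premises that bottom out in the axioms. All rule names are invented.]

```python
# Toy rule base: (premises, conclusion) pairs. All symbols are made up.
RULES = [
    ({"a"}, "b"),        # a -> b
    ({"b"}, "c"),        # b -> c
    ({"c", "d"}, "x"),   # c and d -> x
]

def forward_chain(axioms):
    """Axiom-first: start from what is assumed true and derive
    everything reachable, with no particular goal in mind."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def backward_chain(goal, axioms):
    """Goal-first: start from X and work backwards, asking which
    rules conclude X and whether their premises can in turn be
    established -- the 'reverse construction' described above."""
    if goal in axioms:
        return True
    return any(
        conclusion == goal and all(backward_chain(p, axioms) for p in premises)
        for premises, conclusion in RULES
    )

axioms = {"a", "d"}
print(forward_chain(axioms))        # {'a', 'b', 'c', 'd', 'x'}
print(backward_chain("x", axioms))  # True: a->b->c plus d yields x
```

[In both directions the axioms themselves are simply accepted, which is the analogy being made: a top goal, like an axiom, is the point at which justification stops.]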

Stathis Papaioannou

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT