From: H C (lphege@hotmail.com)
Date: Mon Aug 22 2005 - 18:53:02 MDT
>The stuff I've had time to read on the SIAI website appears to be fairly
>high-level; they've certainly solved some interesting problems before
>they've even arisen, using some clever reasoning. However, I disagree with
>the statement that "...the search for a single essence of intelligence lies
>at the center of AI's previous failures". I believe the problem is quite
>the opposite; we have not yet looked hard enough at the overall problem to
>realise that the most complex problem (as we see it) is actually the
>simplest - that of how to integrate emotion into an artificial mind.
>
>My thinking has largely focussed upon this problem, with some startling
>conclusions. I believe that emotion is a fundamental component of
>intelligence, that the two are inextricably linked, and that it may even be
>the case that they cannot exist without each other. I won't go into too
>much detail (just in case I'm emailing a black hole here), but I believe
>that a simple system, constructed around some very basic principles (I
>refer to it as emotional mechanics) can be emergent, and that the emergent
>properties and behaviours are what we would classify as intelligence.
Sounds promising to me, but I think Michael Wilson would probably vomit all
over himself before agreeing with me on that point.
>From: Chris Paget <ivegotta@tombom.co.uk>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: [JOIN] Chris Paget
>Date: Tue, 23 Aug 2005 01:34:16 +0100
>
>Hi all,
>
>I've been pointed here by a couple of folks over at SIAI, since I have some
>ideas about AGI - somewhat controversial ones by all accounts. So, by way of
>a join post, I'll copy and paste the relevant bits from the SI list...
>
>
>A quick disclaimer: Today was the first time I even heard the phrase
>"technological singularity", although I've been exploring the idea for some
>time now. I'm definitely not "up" on the terminology, so apologies in
>advance for any misnomers :)
>
>Not-quite-first, an introduction. My name is Chris Paget, I'm a
>27-year-old security consultant who lives and works in London. I'm married
>(for almost a year now) to an American, Erin, and we're in the process of
>applying for a spousal visa so we can move back to Pennsylvania (where Erin
>is from). I've been programming since I was 3 years old, and a penetration
>tester for about 4 years now, the last 3 of which have been with NGS
>Software; if you're into security you will have probably heard of me from
>security.tombom.co.uk/shatter.html, which I published in August 2002. I'll
>mention that I studied Computer Science at Cambridge Uni (here in the UK)
>but if you want the rest of my (probably rather boring) history feel free
>to ask me :)
>
>Now to the interesting stuff. The idea that I have been working on
>apparently fits into the Wikipedia definition of "strong" AI - I know very
>little about neural nets, genetic algorithms and their ilk, and don't
>really want to know - it seems like a blind alley to me. Too much research
>time is, IMHO, currently spent on AI as a system of complexity; it seems
>like the current thinking is that if you add sufficient complexity to a
>system it can "appear" intelligent, maybe even enough to pass a Turing
>test. I believe this is wrong; my ideas are focused on intelligence
>as an emergent system based on some relatively simple rules.
>
>The stuff I've had time to read on the SIAI website appears to be fairly
>high-level; they've certainly solved some interesting problems before
>they've even arisen, using some clever reasoning. However, I disagree with
>the statement that "...the search for a single essence of intelligence lies
>at the center of AI's previous failures". I believe the problem is quite
>the opposite; we have not yet looked hard enough at the overall problem to
>realise that the most complex problem (as we see it) is actually the
>simplest - that of how to integrate emotion into an artificial mind.
>
>My thinking has largely focussed upon this problem, with some startling
>conclusions. I believe that emotion is a fundamental component of
>intelligence, that the two are inextricably linked, and that it may even be
>the case that they cannot exist without each other. I won't go into too
>much detail (just in case I'm emailing a black hole here), but I believe
>that a simple system, constructed around some very basic principles (I
>refer to it as emotional mechanics) can be emergent, and that the emergent
>properties and behaviours are what we would classify as intelligence.
>
>I'm looking at the problem from a much lower level than what I've read on
>the SIAI site; they've solved a lot of the high-level problems I've thought
>of (and many that I hadn't), but I'm more interested in the guts of the
>system - actually translating it into code that can be written. I'm on the
>verge of a simple implementation, essentially integrating emotions into an
>intelligent tic-tac-toe system. I haven't gotten further than the planning
>yet, but it's already proving an interesting exercise.
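The tic-tac-toe experiment above is only described at the planning stage, so the following is purely a guess at what "integrating emotions" into such a player might look like: a minimal Python sketch in which a single `mood` scalar (a hypothetical stand-in for the unstated "emotional mechanics") shifts move scoring between aggressive and defensive play. Every name, weight, and rule below is an assumption for illustration, not the author's actual design.

```python
# Hypothetical sketch only: a mood scalar in [-1, 1] biases a simple
# tic-tac-toe move chooser between offence (mood > 0) and defence (mood < 0).
# The board is a 9-character string of 'X', 'O', and ' ', indexed 0..8.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def score_move(board, move, me, mood):
    """Score placing `me` at `move`; mood weights offence vs. defence."""
    opp = 'O' if me == 'X' else 'X'
    trial = board[:move] + me + board[move + 1:]
    offense = defense = 0
    for a, b, c in LINES:
        line = trial[a] + trial[b] + trial[c]
        if line.count(me) == 3:
            offense += 100                    # immediate win
        elif line.count(me) == 2 and line.count(' ') == 1:
            offense += 10                     # two-in-a-row threat after the move
        if move in (a, b, c):
            old = board[a] + board[b] + board[c]
            if old.count(opp) == 2 and old.count(' ') == 1:
                defense += 50                 # this move blocks an opponent win
    return (1 + mood) * offense + (1 - mood) * defense

def choose_move(board, me, mood=0.0):
    """Pick the empty cell with the highest mood-weighted score."""
    empties = [i for i, cell in enumerate(board) if cell == ' ']
    return max(empties, key=lambda m: score_move(board, m, me, mood))
```

Given a board where X can either win at cell 2 or block O at cell 5 (`"XX OO    "`), an aggressive mood (+0.9) takes the win while a fearful mood (-0.9) blocks instead; the "emotion" simply reweights fixed heuristics rather than emerging from them, which is presumably far short of what the post has in mind.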
>
>Anyway, I think I've rambled enough for what was intended to be a brief
>introduction, so I'll stop here. Please feel free to ask for more detail
>on anything I've mentioned in this mail; I'm happy to discuss anything. Be
>warned though, if you do wish to ask about my visa application the response
>you get may be somewhat tedious and boring - US visas are not fun to apply
>for :(
>
>Cheers,
>
>Chris
>
>--
>Chris Paget
>ivegotta@tombom.co.uk